Overview

Echoes of the Past uses a feature-based architecture to organize code by domain functionality rather than technical layers. Each feature encapsulates its own components, hooks, actions, and utilities, making the codebase scalable and maintainable.

Feature Directory Structure

The application is organized into self-contained feature modules located in features/:
features/
├── analytics/          # User feedback and analytics
│   ├── components/
│   └── hooks/
├── auth/               # Authentication flows
│   └── components/
├── call/               # Voice call interface
│   ├── actions.ts
│   ├── components/
│   ├── hooks/
│   ├── lib/
│   └── types.ts
├── character/          # Historical figure management
│   ├── actions.ts
│   ├── components/
│   └── hooks/
├── dashboard/          # User dashboard
├── landing-page/       # Marketing pages
├── quiz/               # Quiz functionality
│   ├── actions.ts
│   └── components/
└── sidebar/            # Navigation sidebar

Core Architecture Patterns

Actions Pattern (Server-Side Logic)

Each feature contains an actions.ts file with server-side functions marked with 'use server'. These handle database operations, API calls, and business logic.
features/character/actions.ts
'use server'

import { createClient } from '@/utils/supabase/server'
import { elevenlabs } from '@/lib/elevenlabs'
import redis from '@/lib/redis'

export async function addCharacter(data: CharacterFormValues) {
  const supabase = await createClient()
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) {
    throw new Error('Not authenticated')
  }

  // Rate limiting: cap character creation per user
  const rateLimit = await redis.incr(`character-rate-limit:${user.id}`)
  if (rateLimit > 10) {
    throw new Error('Maximum characters per day reached')
  }

  // Insert character data
  const { data: character, error } = await supabase
    .from('historicalFigures')
    .insert({
      name: data.name,
      imageUrl: data.imageUrl,
      description: data.description,
      category: data.category,
      voiceId: data.voiceId,
    })
    .select('id')
    .single()

  if (error) {
    throw error
  }
  return character.id
}

export const getCharacter = async (id: string) => {
  const supabase = await createClient()
  const { data } = await supabase
    .from('historicalFigures')
    .select('*')
    .eq('id', id)
    .single()
  return data
}

Components Pattern (UI Layer)

Components are organized by feature and follow Next.js App Router conventions:
Server Components (default in Next.js 13+):
  • No 'use client' directive
  • Can directly access server-side resources
  • Better performance, smaller bundles
  • Used for static content, data fetching
Client Components:
  • Must include 'use client' at top of file
  • Required for interactivity (hooks, state, events)
  • Used for forms, real-time features, browser APIs
features/call/components/call-interface.tsx
'use client'

import { useEffect, useState } from 'react'
// Import paths below are assumed: Avatar comes from the shared UI library,
// the rest live alongside this component in the call feature
import { Avatar, AvatarImage } from '@/components/ui/avatar'
import { AssistantButton } from './assistantButton'
import { Siri } from './siri'
import { useVapi } from '../hooks/useVapi'
import { CALL_STATUS } from '../types'

export const CallInterface = ({ character, systemPrompt, firstMessage }) => {
  const [callDuration, setCallDuration] = useState(0)
  const { toggleCall, callStatus, audioLevel, messages } = useVapi({
    character,
    systemPrompt,
    firstMessage
  })

  // Start call on mount
  useEffect(() => {
    toggleCall()
  }, [])

  // Track call duration
  useEffect(() => {
    let interval: ReturnType<typeof setInterval> | undefined
    if (callStatus === CALL_STATUS.ACTIVE) {
      interval = setInterval(() => {
        setCallDuration((prev) => prev + 1)
      }, 1000)
    }
    return () => {
      if (interval) clearInterval(interval)
    }
  }, [callStatus])

  return (
    <div className="min-h-screen">
      <Avatar className="w-32 h-32">
        <AvatarImage src={character.imageUrl} />
      </Avatar>
      <h1>{character.name}</h1>
      <Siri audioLevel={audioLevel} callStatus={callStatus} />
      <AssistantButton onClick={toggleCall} callStatus={callStatus} />
    </div>
  )
}
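The `callDuration` state above counts elapsed seconds. A small formatter (a hypothetical helper, not shown in the source) can turn it into an `m:ss` display for the UI:

```typescript
// Format a duration in seconds as m:ss (e.g. 65 becomes "1:05")
export function formatCallDuration(totalSeconds: number): string {
  const minutes = Math.floor(totalSeconds / 60)
  const seconds = totalSeconds % 60
  return `${minutes}:${seconds.toString().padStart(2, '0')}`
}
```

Inside the JSX this would be rendered as `{formatCallDuration(callDuration)}`.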

Hooks Pattern (Reusable Logic)

Custom hooks encapsulate feature-specific logic and state management:
features/call/hooks/useVapi.ts
'use client'

import { useEffect, useState } from 'react'
import { vapi } from '@/lib/vapi'
import { CALL_STATUS } from '../types'
import { Message, TranscriptMessage } from '@/types/conversation.type'

export function useVapi({ character, systemPrompt, firstMessage }) {
  const [isSpeechActive, setIsSpeechActive] = useState(false)
  const [callStatus, setCallStatus] = useState(CALL_STATUS.INACTIVE)
  const [messages, setMessages] = useState<Message[]>([])
  const [activeTranscript, setActiveTranscript] = useState<TranscriptMessage | null>(null)
  const [audioLevel, setAudioLevel] = useState(0)

  // Configure Vapi assistant
  const assistant = {
    name: character.name,
    firstMessage,
    model: {
      provider: 'openai',
      model: 'gpt-3.5-turbo',
      temperature: 0.7,
      messages: [{ role: 'system', content: systemPrompt }]
    },
    voice: {
      provider: '11labs',
      voiceId: character.voiceId,
      stability: 0.4,
      similarityBoost: 0.8,
    }
  }

  useEffect(() => {
    // Event handlers
    const onSpeechStart = () => setIsSpeechActive(true)
    const onSpeechEnd = () => setIsSpeechActive(false)
    const onCallStart = () => setCallStatus(CALL_STATUS.ACTIVE)
    const onCallEnd = () => setCallStatus(CALL_STATUS.INACTIVE)
    const onVolumeLevel = (volume: number) => setAudioLevel(volume)
    
    const onMessageUpdate = (message: Message) => {
      if (message.transcriptType === 'partial') {
        setActiveTranscript(message)
      } else {
        setMessages(prev => [...prev, message])
        setActiveTranscript(null)
      }
    }

    // Register listeners
    vapi.on('speech-start', onSpeechStart)
    vapi.on('speech-end', onSpeechEnd)
    vapi.on('call-start', onCallStart)
    vapi.on('call-end', onCallEnd)
    vapi.on('volume-level', onVolumeLevel)
    vapi.on('message', onMessageUpdate)

    // Cleanup: unregister every listener on unmount
    return () => {
      vapi.off('speech-start', onSpeechStart)
      vapi.off('speech-end', onSpeechEnd)
      vapi.off('call-start', onCallStart)
      vapi.off('call-end', onCallEnd)
      vapi.off('volume-level', onVolumeLevel)
      vapi.off('message', onMessageUpdate)
    }
  }, [])

  const start = async () => {
    setCallStatus(CALL_STATUS.LOADING)
    await vapi.start(assistant)
  }

  const stop = () => {
    setCallStatus(CALL_STATUS.LOADING)
    vapi.stop()
  }

  return {
    isSpeechActive,
    callStatus,
    audioLevel,
    activeTranscript,
    messages,
    start,
    stop,
    toggleCall: () => callStatus === CALL_STATUS.ACTIVE ? stop() : start()
  }
}
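The partial/final branching in `onMessageUpdate` lends itself to a pure reducer that can be unit-tested without a live Vapi connection. This is a hypothetical refactor with types simplified from the fields the hook actually reads:

```typescript
// Simplified message shape based on the fields onMessageUpdate reads
export type TranscriptUpdate = {
  transcriptType: 'partial' | 'final'
  role: 'user' | 'assistant'
  transcript: string
}

export type TranscriptState = {
  messages: TranscriptUpdate[]
  activeTranscript: TranscriptUpdate | null
}

// Pure version of onMessageUpdate: a partial update replaces the active
// in-progress line, while a final update is appended to the message list
// and clears the active line.
export function reduceTranscript(
  state: TranscriptState,
  message: TranscriptUpdate
): TranscriptState {
  if (message.transcriptType === 'partial') {
    return { ...state, activeTranscript: message }
  }
  return { messages: [...state.messages, message], activeTranscript: null }
}
```

The event handler then becomes a thin wrapper that feeds each incoming message through the reducer and writes the result into state.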

Lib Pattern (Utilities)

Each feature can have a lib/ directory for feature-specific utilities:
features/call/lib/generate-feedback.ts
import { openai } from '@/lib/ai'
import { Message } from '@/types/conversation.type'

export async function generateFeedback(messages: Message[]) {
  const transcript = messages
    .filter(m => m.type === 'transcript')
    .map(m => `${m.role}: ${m.transcript}`)
    .join('\n')

  const completion = await openai.chat.completions.create({
    messages: [
      {
        role: 'system',
        content: 'Analyze the conversation and provide feedback'
      },
      { role: 'user', content: transcript }
    ],
    model: 'gpt-4-turbo-preview'
  })

  return completion.choices[0].message.content
}

Type Definitions

Features define their own types in dedicated types.ts files:
features/call/types.ts
export type VapiCallProps = {
  systemPrompt: string
  firstMessage: string
}

export enum CALL_STATUS {
  INACTIVE = 'inactive',
  ACTIVE = 'active',
  LOADING = 'loading',
}

export type TranscriptMessage = {
  speaker: 'user' | 'assistant'
  transcript: string
}

Best Practices

Create a new feature module when:
  • Functionality is logically independent
  • Multiple related components/hooks are needed
  • Feature has its own server actions
  • Code reuse across multiple pages is expected
Example: The call feature includes the call interface, transcription, feedback generation, and related utilities.
Within components/, organize by:
  • Specificity: More specific components in subdirectories
  • Reusability: Shared components at root level
  • Naming: Use descriptive, feature-specific names
features/call/components/
├── call-interface.tsx       # Main component
├── assistantButton.tsx      # Reusable
├── siri.tsx                 # Visual effects
├── TranscriptView.tsx       # Sub-component
└── feedback-button.tsx      # Feature-specific
Use Server Actions (actions.ts) when:
  • Direct database operations
  • Form submissions
  • Server-side mutations
  • Type-safe data fetching
Use API Routes (/app/api/) when:
  • Webhooks from external services
  • Public APIs
  • Non-Next.js clients
  • Complex streaming responses
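As a sketch of the API-route side, an App Router route handler exports HTTP-method functions that receive a Web-standard Request and return a Response. The file path (`app/api/webhooks/route.ts`) follows the convention above, but the payload shape and handler body here are invented for illustration:

```typescript
// Hypothetical webhook handler for an external service,
// e.g. app/api/webhooks/route.ts
export async function POST(request: Request): Promise<Response> {
  let payload: { type?: string }
  try {
    payload = await request.json()
  } catch {
    return new Response('Invalid JSON', { status: 400 })
  }

  if (!payload.type) {
    return new Response('Missing event type', { status: 400 })
  }

  // A real handler would verify the sender's signature header
  // and dispatch on payload.type here
  return Response.json({ received: payload.type })
}
```

Unlike a server action, this endpoint is callable by any HTTP client, which is exactly what external webhook senders need.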

Feature Integration Example

Here’s how features work together in a page:
app/app/call/[id]/page.tsx
import { getCharacter } from '@/features/character/actions'
import { CallInterface } from '@/features/call/components/call-interface'
import { generateCallPrompt, generateCallFirstMessage } from '@/lib/prompt'

export default async function CallPage({ params }: { params: { id: string } }) {
  // Server-side data fetching
  const character = await getCharacter(params.id)
  
  // Generate prompts using shared utilities
  const systemPrompt = generateCallPrompt(character)
  const firstMessage = generateCallFirstMessage(character)

  // Render client component with data
  return (
    <CallInterface
      character={character}
      systemPrompt={systemPrompt}
      firstMessage={firstMessage}
      backHref="/app"
    />
  )
}
