Get Started with Authentication

To use the Typecast API, you’ll need to authenticate your requests with an API key. Follow these steps:
1. Visit your Typecast Dashboard to generate a new API key.
2. Keep your API key secure; we recommend storing it as an environment variable (see the sketch below).
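
For example, here is a minimal sketch of reading the key from an environment variable in Python. The variable name TYPECAST_API_KEY is illustrative, not something the API requires:

import os

# TYPECAST_API_KEY is an illustrative name; use whichever variable you set
# in your shell or deployment environment.
api_key = os.environ["TYPECAST_API_KEY"]

This keeps the key out of your source code and version control.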

Make your first request

1. Install the SDK (Python or JavaScript)

pip install --upgrade typecast-python

Both SDKs require version 0.1.5 or higher.
  • Python: If you have an older version, upgrade with pip install --upgrade typecast-python
  • JavaScript: If you have an older version, upgrade with npm update @neosapience/typecast-js (an install command for this SDK is sketched below)
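
If you are starting from scratch with the JavaScript SDK, the corresponding install command (assuming the same package name used in the upgrade command above) would be:

npm install @neosapience/typecast-js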
2. Import and Initialize

from typecast import Typecast
from typecast.models import TTSRequest, SmartPrompt

# Initialize client
client = Typecast(api_key="YOUR_API_KEY")

# Convert text to speech
response = client.text_to_speech(TTSRequest(
    text="Everything is going to be okay.",
    model="ssfm-v30",
    voice_id="tc_672c5f5ce59fac2a48faeaee",
    prompt=SmartPrompt(
        emotion_type="smart",
        previous_text="I just got the best news!",
        next_text="I can't wait to celebrate!"
    )
))

# Save audio file
with open('typecast.wav', 'wb') as f:
    f.write(response.audio_data)
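
For a quick sanity check on the saved file (assuming the response audio is WAV, as the filename above suggests), the standard-library wave module can report its duration:

import wave

# Inspect the file written above; standard library only.
with wave.open('typecast.wav', 'rb') as wav:
    duration = wav.getnframes() / wav.getframerate()
    print(f"Saved {duration:.2f} seconds of audio")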
To browse and select available voice IDs for your requests, please refer to Listing all voices in our API Reference.

List all voices

To use Typecast effectively, you need access to voice IDs. The /v2/voices endpoint provides a complete list of available voices with their unique identifiers, names, supported models, and emotions. You can filter voices by model, gender, age, and use cases using optional query parameters.
You can explore our complete voice catalog in more detail on the Voices page, where you’ll find additional information about each voice’s characteristics, sample audio clips, and recommended use cases.
from typecast import Typecast
from typecast.models import VoicesV2Filter, TTSModel

# Initialize client
client = Typecast(api_key="YOUR_API_KEY")

# Get all voices (optionally filter by model, gender, age, use_cases)
voices = client.voices_v2(VoicesV2Filter(model=TTSModel.SSFM_V30))

print(f"Found {len(voices)} voices:")
for voice in voices:
    for model in voice.models:
        print(f"ID: {voice.voice_id}, Name: {voice.voice_name}, Model: {model.version.value}, Emotions: {', '.join(model.emotions)}")
The response will be a JSON array of voice objects, each containing:
{
  "voice_id": "tc_672c5f5ce59fac2a48faeaee",
  "voice_name": "Dylan",
  "models": [
    {
      "version": "ssfm-v30",
      "emotions": ["normal", "happy", "sad", "angry", "whisper", "toneup", "tonedown"]
    }
  ],
  "gender": "male",
  "age": "young_adult",
  "use_cases": ["Conversational", "TikTok/Reels/Shorts", "Audiobook/Storytelling"]
}
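
The optional filters can be combined. As a rough sketch, narrowing the listing to male voices might look like this; the gender field name and its string value are assumptions based on the documented query parameters and the response fields above:

# Assumed filter field: gender (value format taken from the response example above)
male_voices = client.voices_v2(VoicesV2Filter(
    model=TTSModel.SSFM_V30,
    gender="male",
))
print(f"Found {len(male_voices)} male voices")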
You’ll need a valid voice ID when making text-to-speech requests. With ssfm-v30, all 7 emotion presets are available across all voices.
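
Once you have chosen a voice, its voice_id plugs directly into the text-to-speech call from the previous section. A minimal sketch, reusing the client and voices objects from the listing example above (the prompt argument is omitted here, which assumes it is optional):

from typecast.models import TTSRequest

# Take the first voice from the filtered listing; error handling is omitted.
voice = voices[0]

response = client.text_to_speech(TTSRequest(
    text="Thanks for trying Typecast!",
    model="ssfm-v30",
    voice_id=voice.voice_id,
))

with open('sample.wav', 'wb') as f:
    f.write(response.audio_data)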

Next steps

Congratulations on creating your first AI voice! Here are some resources to help you dive deeper: