🚀 Automate Long YouTube Video to Multiple Shorts with Custom Captions & Scheduling! (Free n8n Workflow)

Hey fellow n8n automators!

Like many creators, I was spending way too much time manually clipping interesting segments from my long videos, adding captions, branding, generating titles/descriptions, and scheduling them as YouTube Shorts. To save time and streamline the process, I built an n8n workflow, and I'm sharing it today for free on Gumroad!

This workflow takes a YouTube video ID, uses Swiftia.io to automatically identify and extract potential short clips, lets you apply custom caption styling/branding, generates optimized metadata (title, description, tags, category) using your choice of Large Language Model (LLM), and then uploads and schedules the shorts directly to your YouTube channel.

How it Works (High-Level):

  1. Trigger: Starts with an n8n Form where you input the YouTube Video ID, desired first publication date, interval between shorts, and optional caption styling.
  2. Clip Generation: Calls the Swiftia API to analyze the video and generate potential short clips based on interesting moments.
  3. Wait & Check: Waits for Swiftia to complete the job.
  4. Split & Schedule: Splits the results into individual potential shorts and calculates the publication date for each based on your interval (see the scheduling sketch after this list).
  5. Loop & Process: Loops through each potential short (up to a limit you can set, default is 10).
  6. Render: Calls Swiftia again to render the short with your specified caption styling/branding (if provided).
  7. Wait & Check Render: Waits for the rendering to complete.
  8. Generate Metadata (LLM): Feeds the short's transcript and context to an LLM (via n8n's LangChain nodes) to generate an optimized title, description, tags, and YouTube category ID.
  9. YouTube Upload: Downloads the rendered short and uploads it to YouTube using the resumable upload API, setting the generated metadata and the calculated publication schedule.
  10. Respond: Finally responds to the initial Form trigger confirming the process started/completed (can be customized).
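
To make step 4 concrete, here is a minimal sketch of the scheduling math: starting from the first publication date and the interval you enter in the form, each short gets its own publishAt timestamp. This is plain TypeScript with illustrative names (firstPublicationDate, intervalHours, clipCount), not the exact code inside the workflow's nodes.

```typescript
// Illustrative only: how per-short publication dates can be derived from the form
// inputs. Field names are placeholders, not the workflow's internal names.
interface ClipSchedule {
  clipIndex: number;
  publishAt: string; // ISO 8601, e.g. "2025-05-10T08:00:00Z"
}

function schedulePublications(
  firstPublicationDate: string, // from the n8n Form, e.g. "2025-05-10T08:00:00Z"
  intervalHours: number,        // hours between consecutive shorts
  clipCount: number             // number of potential shorts returned by the analysis
): ClipSchedule[] {
  const start = new Date(firstPublicationDate).getTime();
  const intervalMs = intervalHours * 60 * 60 * 1000;

  return Array.from({ length: clipCount }, (_, i) => ({
    clipIndex: i,
    // Each subsequent short is pushed back by one more interval.
    publishAt: new Date(start + i * intervalMs).toISOString(),
  }));
}

// Example: 3 shorts, 24 hours apart, starting 2025-05-10T08:00:00Z
console.log(schedulePublications("2025-05-10T08:00:00Z", 24, 3));
```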

Who is this for?

  • Podcasters wanting to repurpose video podcast segments.
  • YouTube creators looking to efficiently create Shorts from existing content.
  • Marketers needing to generate short-form video content quickly.
  • Anyone tired of the manual Short creation grind!

Prerequisites - What You'll Need:

  • n8n Instance: Self-hosted or Cloud.
    • [Self-Hosted Heads-Up!] Handling video files in n8n uses a lot of memory! If you hit out-of-memory errors, either increase the RAM available to n8n or set the environment variable N8N_DEFAULT_BINARY_DATA_MODE=filesystem (and make sure you have enough disk space!).
  • Swiftia Account & API Key: For the video analysis and rendering (https://swiftia.io/).
  • Google Account & YouTube Channel: The channel where shorts will be uploaded.
  • Google Cloud Platform (GCP) Project:
    • YouTube Data API v3 Enabled.
    • OAuth 2.0 Credentials (Client ID & Secret) for the YouTube API.
  • LLM Provider Account & API Key: Your choice! (e.g., OpenAI, Google Gemini, Groq, Anthropic via OpenRouter, etc.). The template uses Gemini, but you can swap the node.
  • n8n LangChain Nodes: Ensure @n8n/n8n-nodes-langchain (or similar) is installed if needed.
  • (Optional) Caption Styling JSON: If you want custom captions/branding, get the JSON/preset from Swiftia's playground.

Setup Instructions:

  1. Download: Get the workflow .json file for free from the Gumroad link below.
  2. Import: Import the workflow into your n8n instance.
  3. Create n8n Credentials:
    • Swiftia: Create a "Header Auth" credential (or be prepared to paste the API key directly in the HTTP nodes marked <YOUR_SWIFTIA_API_KEY>).
    • YouTube: Create a "YouTube OAuth2 API" credential using your GCP OAuth details and authenticate it. Note its name/ID.
    • LLM Provider: Create the appropriate credential for your chosen LLM provider (e.g., "OpenAI API", "Google Gemini API") using your LLM API key. Note its name/ID.
  4. Configure Credentials in Workflow:
    • Go through the nodes (especially HTTP Request, YouTube nodes, LLM node) and select the credentials you just created from the dropdowns, replacing placeholders like <YOUR_YOUTUBE_CREDENTIAL_NAME> etc.
    • IMPORTANT - LLM Node: The template uses the "Google Gemini Chat Model". If you're using a different provider (like OpenAI), DELETE the Gemini node and ADD the correct chat model node (e.g., "OpenAI Chat Model"). Connect it appropriately within the LangChain section (it takes input from current_item_ref and connects to generatingMetaData and Auto-fixing Output Parser). Select your LLM credential in the new node. See the metadata sketch after these setup steps for the kind of structured output this step needs to produce.
  5. Review Placeholders: Double-check all nodes (especially HTTP Requests) for any remaining placeholder values (like API keys in headers if you didn't use Header Auth) and replace them.
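
If you do swap the LLM node, it helps to know roughly what the metadata step is expected to return. The sketch below shows the kind of structured output the parser is looking for; the exact schema and field names in the template may differ, so treat it as illustrative only.

```typescript
// Illustrative shape of the structured metadata the LLM is asked to produce
// (the template's actual schema / field names may differ).
interface ShortMetadata {
  title: string;       // short, hook-style title for the Short
  description: string; // description, typically including #shorts and a call to action
  tags: string[];      // keywords passed along to the YouTube upload
  categoryId: string;  // YouTube category ID, e.g. "22" = People & Blogs
}

// Example of a response the output parser would accept:
const example: ShortMetadata = {
  title: "Why Most Creators Burn Out (and How to Avoid It)",
  description: "Clip from the full episode - watch the whole conversation on the channel. #shorts",
  tags: ["shorts", "creators", "podcast"],
  categoryId: "22",
};
console.log(JSON.stringify(example, null, 2));
```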

Running the Workflow:

  1. Activate the workflow.
  2. Navigate to the URL provided by the "n8n Form Trigger" node.
  3. Fill in the required fields: YouTube Video ID, First Publication Date (e.g., 2025-05-10T08:00:00Z), Interval (hours), and optionally the Caption Styling JSON.
  4. Submit the form and let n8n do the work!

Important Notes:

  • โš ๏ธ API Keys are Sensitive: Make sure you've replaced all placeholder API keys and credential IDs before activating.
  • ๐Ÿ’ฐ Costs: Swiftia, YouTube API (beyond free quotas), and your LLM provider may have associated costs based on usage. Check their pricing.
  • 🧪 Test First: I strongly recommend setting the privacyStatus in the setupMetaData node to private for initial testing, instead of scheduling uploads as public or unlisted with a publishAt (see the metadata sketch after these notes).
  • โš™๏ธ Customize: Feel free to tweak the LLM prompts, the number of shorts generated (maxShortsnumber node), error handling, etc.

If you prefer downloading the workflow from GitHub, here is the link:
