Recordings

Verriflo can record your live classes for on-demand viewing later. Here's how to work with recordings from start to finish.

Starting a Recording

From the Classroom UI

The easiest way: teachers click the "Record" button in the classroom controls. A red indicator shows when recording is active.

Recording Indicators

When a class is being recorded:

  • Teacher sees a red "Recording" badge and timer
  • Students see a subtle indicator that the class is being recorded
  • This is intentional—students should know they're being recorded

Recording Format

Recordings are saved as:

Property     Value
Format       MP4 (H.264 video, AAC audio)
Resolution   Up to 1080p (matches instructor quality)
Frame rate   30 FPS
Audio        Stereo, 128 kbps AAC

The recording captures the composite view—exactly what students see, including screen shares.

The 6-Hour Window

Critical Timing

Recordings are available for download for 6 hours after a class ends. After that, they're automatically deleted.

This is a hard limit. We can't recover expired recordings.

Why the limit? Video storage costs add up quickly. If you need permanent copies, download recordings to your own infrastructure within the window.
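
If you want an early warning before the window closes, one option is to parse the expiresIn field that the Download Recordings API returns (see the sample response in the next section). A minimal sketch, assuming the "4h 23m" string format shown there; both helper names are illustrative:

// Sketch: parse the "4h 23m" style expiresIn string and warn when the
// recording is close to expiring. The string format is assumed from the
// sample response below; adjust the parsing if yours differs.
function parseExpiresIn(expiresIn) {
  const hours = Number((expiresIn.match(/(\d+)\s*h/) || [])[1] || 0);
  const minutes = Number((expiresIn.match(/(\d+)\s*m/) || [])[1] || 0);
  return (hours * 60 + minutes) * 60 * 1000; // milliseconds remaining
}

function warnIfExpiringSoon(expiresIn, thresholdMs = 60 * 60 * 1000) {
  const remaining = parseExpiresIn(expiresIn);
  if (remaining < thresholdMs) {
    console.warn(`Recording expires in ${expiresIn} - download it now`);
  }
}

// Usage
warnIfExpiringSoon("4h 23m"); // no warning
warnIfExpiringSoon("0h 45m"); // warns: less than an hour left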

Downloading Recordings

API Endpoint

Call the Download Recordings API:

curl -X GET https://api.verriflo.com/v1/room/your-room-id/recordings/download \
-H "VF-ORG-ID: your-organization-id"

Response:

{
  "success": true,
  "data": {
    "roomId": "your-room-id",
    "files": [
      {
        "filename": "recording-2024-01-15-10-30-00.mp4",
        "downloadUrl": "https://storage.verriflo.com/...",
        "size": 157286400,
        "duration": 3600
      }
    ],
    "expiresIn": "4h 23m"
  }
}
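
If you'd rather make the same call from Node, here's a minimal sketch using the fetch built into Node 18+. It can also serve as the fetchRecordings helper referenced in the automation examples below; the error.status convention matches those examples:

// Sketch: the same request from Node 18+ using the built-in fetch.
async function fetchRecordings(roomId) {
  const response = await fetch(
    `https://api.verriflo.com/v1/room/${roomId}/recordings/download`,
    { headers: { "VF-ORG-ID": process.env.VERRIFLO_ORG_ID } }
  );

  const body = await response.json();

  if (!body.success) {
    const error = new Error(body.message || `Request failed (${response.status})`);
    error.status = response.status; // e.g. 404, 409, 410 handled below
    throw error;
  }

  return body.data; // { roomId, files, expiresIn }
}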

Download the File

The downloadUrl is a pre-signed URL that's valid for about an hour. Download directly:

const fs = require("fs");
const https = require("https");

function downloadRecording(url, filename) {
  return new Promise((resolve, reject) => {
    const file = fs.createWriteStream(filename);

    https
      .get(url, (response) => {
        if (response.statusCode !== 200) {
          reject(new Error(`Download failed with status ${response.statusCode}`));
          return;
        }
        response.pipe(file);
        file.on("finish", () => {
          file.close();
          resolve();
        });
      })
      .on("error", (err) => {
        fs.unlink(filename, () => {}); // Delete partial file
        reject(err);
      });
  });
}

// Usage
await downloadRecording(
  data.files[0].downloadUrl,
  "./recordings/class-123.mp4"
);

Automating Downloads

Since recordings expire, you'll want to automate the download process.

Option 1: Scheduled Job

Run a script periodically to check for new recordings:

// run-every-30-minutes.js
const rooms = await getRecentlyEndedRooms(); // From your database

for (const room of rooms) {
  try {
    const recordings = await fetchRecordings(room.roomId);

    for (const file of recordings.files) {
      await downloadAndUploadToS3(file);
    }

    // Mark the room only after all of its files are safely stored
    await markAsDownloaded(room.roomId);
  } catch (error) {
    if (error.status === 404) {
      // Room still live or no recordings
      continue;
    }
    if (error.status === 410) {
      // Expired - log and move on
      console.error(`Recordings expired for ${room.roomId}`);
      await markAsExpired(room.roomId);
      continue;
    }
    throw error;
  }
}

Option 2: Webhook Trigger (Coming Soon)

We're working on webhooks that fire when a class ends. Stay tuned!

Option 3: Polling After Class Ends

If you know when a class ends, start checking for recordings:

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitForRecording(roomId, maxAttempts = 30) {
  for (let i = 0; i < maxAttempts; i++) {
    try {
      const recordings = await fetchRecordings(roomId);
      return recordings; // Success!
    } catch (error) {
      if (error.status === 409) {
        // Still processing - wait and retry
        await sleep(10000); // 10 seconds
        continue;
      }
      throw error;
    }
  }
  throw new Error("Recording processing timeout");
}

Storing Recordings Long-Term

After downloading, upload to your preferred storage:

Amazon S3

const AWS = require("aws-sdk");
const fs = require("fs");

const s3 = new AWS.S3();

async function uploadToS3(localPath, key) {
  const fileContent = fs.readFileSync(localPath);

  await s3
    .upload({
      Bucket: process.env.S3_BUCKET,
      Key: key,
      Body: fileContent,
      ContentType: "video/mp4",
    })
    .promise();

  // Clean up local file
  fs.unlinkSync(localPath);
}

// Usage
await downloadRecording(url, "/tmp/recording.mp4");
await uploadToS3("/tmp/recording.mp4", `recordings/${roomId}/class.mp4`);

Google Cloud Storage

const { Storage } = require("@google-cloud/storage");
const storage = new Storage();

async function uploadToGCS(localPath, filename) {
  await storage.bucket(process.env.GCS_BUCKET).upload(localPath, {
    destination: `recordings/${filename}`,
    metadata: { contentType: "video/mp4" },
  });
}

Azure Blob Storage

const { BlobServiceClient } = require("@azure/storage-blob");

async function uploadToAzure(localPath, blobName) {
  const blobServiceClient = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING
  );
  const containerClient = blobServiceClient.getContainerClient("recordings");
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  await blockBlobClient.uploadFile(localPath);
}

Complete Workflow Example

Here's a full example of a recording download service:

// recording-service.js
const fs = require("fs");
const path = require("path");

class RecordingService {
  constructor(verrifloOrgId, s3Bucket) {
    this.verrifloOrgId = verrifloOrgId;
    this.s3Bucket = s3Bucket;
    this.downloadDir = "/tmp/recordings";

    // Ensure download directory exists
    if (!fs.existsSync(this.downloadDir)) {
      fs.mkdirSync(this.downloadDir, { recursive: true });
    }
  }

  async processRecording(roomId) {
    console.log(`Processing recording for ${roomId}`);

    // 1. Fetch recording info
    const recordingData = await this.fetchRecordingInfo(roomId);

    // 2. Download each file
    const uploadedFiles = [];

    for (const file of recordingData.files) {
      const localPath = path.join(this.downloadDir, file.filename);

      // Download
      console.log(`Downloading ${file.filename}...`);
      await this.downloadFile(file.downloadUrl, localPath);

      // Upload to S3
      const s3Key = `recordings/${roomId}/${file.filename}`;
      console.log(`Uploading to S3: ${s3Key}`);
      await this.uploadToS3(localPath, s3Key);

      // Clean up
      fs.unlinkSync(localPath);

      uploadedFiles.push({
        filename: file.filename,
        s3Key,
        size: file.size,
        duration: file.duration,
      });
    }

    return uploadedFiles;
  }

  async fetchRecordingInfo(roomId) {
    const response = await fetch(
      `https://api.verriflo.com/v1/room/${roomId}/recordings/download`,
      {
        headers: { "VF-ORG-ID": this.verrifloOrgId },
      }
    );

    const data = await response.json();

    if (data.success !== true) {
      const error = new Error(data.message);
      error.status = response.status;
      throw error;
    }

    return data.data;
  }

  // ... downloadFile and uploadToS3 methods
}

// Usage
const service = new RecordingService(
  process.env.VERRIFLO_ORG_ID,
  process.env.S3_BUCKET
);

// Call after class ends
await service.processRecording("math-101-2024-01-15");

Best Practices

1. Download Promptly

Don't wait until the last hour. Aim to download within 1-2 hours of the class ending.
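
One way to stay well inside the window is to kick off the download as soon as your application sees the class end. A minimal sketch, reusing the waitForRecording and downloadRecording helpers from earlier (scheduleDownload and the one-hour delay are illustrative):

// Sketch: start downloading shortly after a class ends, well inside the
// 6-hour window. Assumes your app observes the class-ended moment.
function scheduleDownload(roomId, delayMs = 60 * 60 * 1000) {
  setTimeout(async () => {
    try {
      const recordings = await waitForRecording(roomId);
      for (const file of recordings.files) {
        await downloadRecording(file.downloadUrl, `/tmp/${file.filename}`);
      }
    } catch (error) {
      console.error(`Failed to download recordings for ${roomId}`, error);
    }
  }, delayMs);
}

// Call when your application sees the class end
scheduleDownload("math-101-2024-01-15");

In production you'd likely hand this work to a job queue or the scheduled job from Option 1 rather than an in-process setTimeout.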

2. Verify Downloads

Check file integrity after download. At minimum, compare the downloaded file's size to the size field from the API response (see the sketch after the checksum helper below); a checksum is also useful for your own audit trail:

const crypto = require("crypto");
const fs = require("fs");

function getFileChecksum(filePath) {
  return new Promise((resolve, reject) => {
    const hash = crypto.createHash("md5");
    const stream = fs.createReadStream(filePath);
    stream.on("error", reject);
    stream.on("data", (data) => hash.update(data));
    stream.on("end", () => resolve(hash.digest("hex")));
  });
}
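
The size check itself can be as simple as comparing against the size field from the download response (verifyDownload is just an illustrative name):

const fs = require("fs");

// Sketch: compare the downloaded file's size to the size reported by the API.
function verifyDownload(localPath, expectedSize) {
  const actualSize = fs.statSync(localPath).size;
  if (actualSize !== expectedSize) {
    throw new Error(
      `Size mismatch for ${localPath}: expected ${expectedSize}, got ${actualSize}`
    );
  }
}

// Usage: file is one entry from the files array in the download response
verifyDownload("/tmp/recording.mp4", file.size);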

3. Handle Failures Gracefully

Implement retries with exponential backoff:

async function downloadWithRetry(url, path, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      await downloadFile(url, path);
      return;
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      await sleep(Math.pow(2, i) * 1000); // 1s, 2s, 4s
    }
  }
}

4. Track Recording Status

Keep track in your database:

CREATE TABLE recordings (
  id SERIAL PRIMARY KEY,
  room_id VARCHAR(100) NOT NULL,
  status VARCHAR(20) DEFAULT 'pending', -- pending, downloaded, expired, error
  download_url TEXT,
  s3_key TEXT,
  file_size BIGINT,
  duration_seconds INTEGER,
  downloaded_at TIMESTAMP,
  created_at TIMESTAMP DEFAULT NOW()
);
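
On PostgreSQL, the markAsDownloaded and markAsExpired helpers referenced in the scheduled-job example might look roughly like this, using the node-postgres (pg) client against the table above:

const { Pool } = require("pg");
const pool = new Pool(); // connection settings come from the PG* environment variables

// Sketch: one way to implement the status helpers used in Option 1.
async function markAsDownloaded(roomId) {
  await pool.query(
    "UPDATE recordings SET status = 'downloaded', downloaded_at = NOW() WHERE room_id = $1",
    [roomId]
  );
}

async function markAsExpired(roomId) {
  await pool.query(
    "UPDATE recordings SET status = 'expired' WHERE room_id = $1",
    [roomId]
  );
}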

Next: Troubleshooting — Common issues and how to fix them.