I'm calling OpenAI's audio transcription API to convert an audio file to text. I understand that the API requires a file of type Core.Uploadable, which is why I append the current timestamp as lastModified when creating the File.
An example URI is: file:///var/mobile/Containers/Data/Application/D19779A9-081C-4A53-B872-1492416A3ACD/Library/Caches/ExponentExperienceData/@anonymous/my-ai-assistant-84e367f8-fea0-4610-b263-9dfbe9752bc8/AV/recording-421304D5-BAF4-4CFF-9991-3838D1EE3398.m4a
I'm not sure why the API is not accepting the file.
export async function getTranscription(uri: string): Promise<string> {
  try {
    // Fetch the file from the given URI
    const response = await fetch(uri);
    if (!response.ok) {
      throw new Error(`Failed to fetch file: ${response.statusText}`);
    }
    // Convert the response to a Blob
    const blob = await response.blob();
    // Create a File from the Blob
    const file = new File([blob], "audio.m4a", {
      type: "audio/m4a",
      lastModified: Date.now(),
    });
    // Send the File to the Whisper transcription API
    const transcription = await openai.audio.transcriptions.create({
      file: file,
      model: "whisper-1",
      language: "en",
    });
    console.log(transcription.text);
    return transcription.text;
  } catch (error) {
    console.error("Error during transcription:", error);
    throw error;
  }
}
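For comparison, the same File construction works in plain Node. This is a minimal sketch, assuming Node 18.13+ (where File and Blob are exported from node:buffer) and a small in-memory byte array standing in for the recording; it only reproduces the Blob-to-File step from the function above, not the API call:

```typescript
import { File, Blob } from "node:buffer";

// Stand-in for the fetched recording: 16 bytes of silence
const blob = new Blob([new Uint8Array(16)], { type: "audio/m4a" });

// Same construction as in getTranscription above
const file = new File([blob], "audio.m4a", {
  type: "audio/m4a",
  lastModified: Date.now(),
});

console.log(file.name, file.type, file.size);
```

In Node this produces a File whose name, type, and size are populated as expected, which suggests the construction itself is fine and the difference lies in the React Native runtime.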
I get:
(NOBRIDGE) ERROR Error during transcription: [Error: 400 [{'type': 'missing', 'loc': ('body', 'file'), 'msg': 'Field required', 'input': None}]]
(NOBRIDGE) ERROR Failed getting transcription [Error: 400 [{'type': 'missing', 'loc': ('body', 'file'), 'msg': 'Field required', 'input': None}]]