Quote:
Originally Posted by twobob
Ah, thanks very much.
I thought I was not being clear.
So in essence either
a) a hand-made file: slice up two seconds of audio, put it as a header (and any other data you want dumped in there), buffer it, splice in a second of video (say), then swap back to audio, etc.
or
b) just get a known format and handle it that way, i.e. give higher priority to audio consumption.
I like the sound of b) better.
One thought: I will try cramming some dummy "audio" through that demo by loading a WAV first and swapping it into the buffer, and see what results.
Start there, come to think of it. (I asked for the thread to be renamed as you suggested, thanks.)
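(For concreteness, the hand-rolled interleaving in option a) would boil down to something like the little muxer sketched below. The chunk framing, tags, file names and byte rates are my assumptions for illustration, not an existing format.)

Code:
/* Sketch of option a): two seconds of audio up front as a "header",
 * then alternating one-second video and audio chunks.
 * Framing, tags, file names and byte rates are all assumptions. */
#include <stdio.h>
#include <stdint.h>

#define AUDIO_BPS 16000u   /* assumed raw audio bytes per second    */
#define VIDEO_BPS 60000u   /* assumed packed video bytes per second */

/* Copy up to `want` bytes from `in` into a tagged chunk in `out`;
 * returns the number of payload bytes actually copied. */
static uint32_t put_chunk(FILE *out, const char *tag, FILE *in, uint32_t want)
{
    uint8_t buf[4096];
    uint32_t done = 0;
    fwrite(tag, 1, 4, out);              /* 4-byte chunk tag */
    fwrite(&want, sizeof want, 1, out);  /* declared length (host endianness;
                                            a real muxer would pin this down) */
    while (done < want) {
        size_t n = fread(buf, 1,
                         (want - done) < sizeof buf ? want - done : sizeof buf,
                         in);
        if (n == 0) break;               /* source exhausted */
        fwrite(buf, 1, n, out);
        done += (uint32_t)n;
    }
    return done;
}

int main(void)
{
    FILE *a = fopen("audio.raw", "rb");  /* hypothetical inputs  */
    FILE *v = fopen("video.raw", "rb");
    FILE *o = fopen("movie.ivl", "wb");  /* hypothetical output  */
    if (!a || !v || !o) return 1;

    put_chunk(o, "AUD0", a, 2 * AUDIO_BPS);                /* 2 s audio "header" */
    for (;;) {
        uint32_t vn = put_chunk(o, "VID0", v, VIDEO_BPS);  /* 1 s of video */
        uint32_t an = put_chunk(o, "AUD0", a, AUDIO_BPS);  /* 1 s of audio */
        if (vn == 0 && an == 0) break;                     /* both streams done */
    }
    fclose(a); fclose(v); fclose(o);
    return 0;
}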
I agree that option b is "better" for a TOOL. My "codec" (dither and bit-packing) was designed to be rendered by extremely small, simple C code suitable for a monolithic tutorial. Packing 8 bits/byte was really just a tiny evolution from packing two 4-bit pixels per byte, as already used in the animation demos.
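To illustrate, packing eight dithered 1-bit pixels per byte is only a small step from packing two 4-bit pixels per byte. A rough sketch of both follows; the nibble and bit ordering here are my assumptions, not necessarily what the demos use.

Code:
#include <stdint.h>
#include <stddef.h>

/* Pack two 4-bit greyscale pixels into one byte, as in the animation
 * demos.  High nibble = first pixel (nibble order is an assumption). */
static void pack_4bpp(const uint8_t *px, uint8_t *out, size_t npix)
{
    for (size_t i = 0; i + 1 < npix; i += 2)
        out[i / 2] = (uint8_t)((px[i] & 0x0F) << 4 | (px[i + 1] & 0x0F));
}

/* Pack eight dithered 1-bit pixels into one byte: the "codec" is
 * little more than ordered dithering followed by this step.
 * MSB-first bit order and nonzero = white are assumptions. */
static void pack_1bpp(const uint8_t *px, uint8_t *out, size_t npix)
{
    for (size_t i = 0; i < npix; i += 8) {
        uint8_t b = 0;
        for (int bit = 0; bit < 8 && i + bit < npix; ++bit)
            if (px[i + bit])
                b |= (uint8_t)(0x80 >> bit);   /* MSB first */
        out[i / 8] = b;
    }
}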
My video transcoder and player are just a tiny evolutionary step beyond my existing animation demos, designed to be easy to understand and reuse. Rather than designing a new container, a "real" player should just squeeze them into an output "driver" for a standard player (like ffmpeg or mplayer) that can play standard media containers (like MKV). But then, of course, my super-simple codec just happens to compress extremely well with gzip, because the 8x8 dithering packs to 8-bit (LZ-friendly) boundaries.
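A quick way to sanity-check that "LZ-friendly" claim is to run one packed 1-bpp frame through zlib (the same DEFLATE gzip uses) and compare sizes. The 800x600 frame size and the dummy fill pattern below are assumptions; a real test would feed in the transcoder's actual output. Link with -lz.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

int main(void)
{
    /* One packed 1-bpp frame at an assumed 800x600 resolution:
     * 8 pixels per byte, so rows land on byte boundaries. */
    uLong raw_len = 800UL * 600UL / 8;
    Bytef *raw = calloc(raw_len, 1);

    /* Stand-in content; a dithered, packed frame would normally
     * be read from the transcoder's output here. */
    for (uLong i = 0; i < raw_len; ++i)
        raw[i] = (i % 2) ? 0xAA : 0x55;   /* crude checker-like pattern */

    uLongf out_len = compressBound(raw_len);
    Bytef *out = malloc(out_len);
    if (!raw || !out) return 1;

    if (compress2(out, &out_len, raw, raw_len, Z_BEST_COMPRESSION) != Z_OK) {
        fprintf(stderr, "compress2 failed\n");
        return 1;
    }
    printf("raw: %lu bytes, deflated: %lu bytes\n",
           (unsigned long)raw_len, (unsigned long)out_len);

    free(raw);
    free(out);
    return 0;
}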
A "real" general purpose code may not compress kindle video quite so well.