
Author Topic: Trying to understand DCP encoding workflow

boxerab

Trying to understand DCP encoding workflow
« on: January 26, 2015, 10:47:46 PM »
Hello!

I am curious about how users typically get their uncompressed data into the PC to get encoded.

A few options I can see are:

1) Compact Flash card ( with card reader on PC)
2) external SSD hard drive (plugged into eSATA port ? )

I assume that the data would be stored as uncompressed TIFF images.

The reason I ask is: I am working on an accelerated JPEG 2000 codec. Currently, I can compress
a 2048x1024 RGB 12-bit TIFF in 45 ms. This includes reading the file from the SSD and moving it to the graphics card.
I have been careful to clear the Windows file cache before running my tests, to ensure that they reflect a real-world
workflow, where files are read only once before encoding and are not cached.
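
For reference, here is roughly how I bypass the cache in my benchmark harness (a minimal, Windows-only sketch with most error handling trimmed; the 4096-byte sector size and the frame_000001.tif file name are just placeholders for my setup):

    #include <windows.h>
    #include <malloc.h>
    #include <stdio.h>

    /* Read a file with the OS cache bypassed, so timings reflect a cold,
     * real-world read instead of a cached one. FILE_FLAG_NO_BUFFERING
     * requires sector-aligned buffers and read sizes (4096 assumed here). */
    static size_t read_uncached(const char *path, unsigned char *dst, size_t capacity)
    {
        HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING,
                               FILE_FLAG_NO_BUFFERING | FILE_FLAG_SEQUENTIAL_SCAN,
                               NULL);
        if (h == INVALID_HANDLE_VALUE) return 0;

        const DWORD chunk = 1u << 20;   /* 1 MB, a multiple of the sector size */
        size_t total = 0;
        for (;;) {
            DWORD want = (capacity - total < chunk) ? (DWORD)(capacity - total) : chunk;
            DWORD got  = 0;
            if (want == 0 || !ReadFile(h, dst + total, want, &got, NULL) || got == 0)
                break;
            total += got;
            if (got < want) break;      /* end of file */
        }
        CloseHandle(h);
        return total;
    }

    int main(void)
    {
        size_t cap = 16u << 20;                          /* room for one 2K frame */
        unsigned char *buf = _aligned_malloc(cap, 4096); /* sector-aligned buffer */
        size_t n = read_uncached("frame_000001.tif", buf, cap);
        printf("read %zu bytes uncached\n", n);
        _aligned_free(buf);
        return 0;
    }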

Once files are read from disk, the encoding is quite rapid, so the frame rate of my encoder is dominated by reading from disk.
In fact, once the encoder is finished, I think I will be able to compress images as fast as I can read them.

According to my calculations, any DCP creation software will be limited by disk reads: it will not be able to compress 2K images faster
than about 20 frames per second. For 4K, the upper limit will be about 5 FPS. Does this make sense, or am I missing something?
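
For what it's worth, here is the back-of-the-envelope behind those numbers (assuming the 12-bit samples are packed at 1.5 bytes each and an effective read rate of roughly 200 MB/s, so this is an estimate rather than a measurement):

    2048 x 1024 pixels x 3 channels x 1.5 bytes  ≈ 9.4 MB per 2K frame
    200 MB/s ÷ 9.4 MB per frame                  ≈ 21 fps for 2K
    4K is 4x the data (≈ 37.7 MB per frame)      → 200 ÷ 37.7 ≈ 5 fps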

Thanks very much,
Aaron

Wolfgang Woehl

Re: Trying to understand DCP encoding workflow
« Reply #1 on: January 27, 2015, 07:23:54 AM »
I'm seeing a wide range of setups out there: anything from external USB 2.0 to Fibre Channel and Thunderbolt RAIDs. Network too, of course. Take a Thunderbolt RAID 0 and you might already be looking at rates way above the 200 MB/s of your example (12 bpc 2K @ 22 fps). I've seen 350 MB/s sustained on one of those.

What are the specs of your test system?

boxerab

Re: Trying to understand DCP encoding workflow
« Reply #2 on: January 27, 2015, 09:35:32 AM »
Thanks, Wolfgang.

I have a Samsung PM830 mSATA SSD and a Core i7-3770 quad-core CPU.

The SSD is rated at up to 500 MB/s sequential read, but of course I am not getting close to that.

Thunderbolt sounds intriguing.  Also, Samsung has announced a PCIe x4 SSD that claims over 2 GB/s reads.

Stepping back a bit, what is a reasonable encoding rate for 2K video? 

Also, I understand that before compression there is a QA check on the TIFF files to ensure that they are stored correctly.
If this is the case, then the QA step would already read the files into RAM, and the encoder would not be limited by disk reads.
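
Something like this is the shape I have in mind, as a rough sketch (verify_tiff() and encode_frame() are just placeholder names for the QA check and my encoder entry point):

    #include <stdio.h>
    #include <stdlib.h>

    /* Placeholder QA check: a real one would validate the TIFF structure. */
    static int verify_tiff(const unsigned char *buf, size_t len)
    {
        (void)buf;
        return len > 8 ? 0 : -1;
    }

    /* Placeholder for the actual encoder entry point. */
    static int encode_frame(const unsigned char *buf, size_t len)
    {
        (void)buf; (void)len;
        return 0;
    }

    /* Read the frame from disk exactly once; the QA step and the encoder
       both work on the same in-memory copy, so only the QA pass pays for
       the disk read. */
    int process_frame(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return -1;

        fseek(f, 0, SEEK_END);
        long len = ftell(f);
        fseek(f, 0, SEEK_SET);
        if (len <= 0) { fclose(f); return -1; }

        unsigned char *buf = malloc((size_t)len);
        if (!buf || fread(buf, 1, (size_t)len, f) != (size_t)len) {
            free(buf);
            fclose(f);
            return -1;
        }
        fclose(f);

        int ok = verify_tiff(buf, (size_t)len) == 0 &&
                 encode_frame(buf, (size_t)len) == 0;
        free(buf);
        return ok ? 0 : -1;
    }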

Thanks again,
Aaron



luca

Re: Trying to understand DCP encoding workflow
« Reply #3 on: January 27, 2015, 08:15:40 PM »
Hi Aaron,
there are various types of TIFF file (even among uncompressed ones), and there are different ways to read them. With a library such as libtiff you have to read the header data first to figure out what type you have, and then, depending on the layout you want in RAM, you may have to transform the data. But if the data on disk is already laid out the way you want it in RAM, you could read it with a single fread() call (or the equivalent libtiff call), which would minimize disk access time. Indeed, I think the problem may be access time more than throughput, even though on an SSD access time is very low compared to hard disks.
I have never attempted to profile the performance of the different libtiff functions, so this is only a guess.
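
Roughly what I mean, as an untested sketch (this assumes a strip-based, uncompressed TIFF; tiled files would need TIFFReadEncodedTile instead):

    #include <stdlib.h>
    #include <tiffio.h>

    /* Read the headers first, then pull the pixel data strip by strip into
       one contiguous buffer. If the on-disk layout already matches what the
       encoder wants, reading in large sequential pieces like this is about
       as good as it gets. */
    unsigned char *load_tiff(const char *path, uint32 *w, uint32 *h)
    {
        TIFF *tif = TIFFOpen(path, "r");
        if (!tif) return NULL;

        TIFFGetField(tif, TIFFTAG_IMAGEWIDTH,  w);
        TIFFGetField(tif, TIFFTAG_IMAGELENGTH, h);

        tstrip_t nstrips   = TIFFNumberOfStrips(tif);
        tsize_t  stripsize = TIFFStripSize(tif);

        unsigned char *buf = malloc((size_t)stripsize * (size_t)nstrips);
        if (!buf) { TIFFClose(tif); return NULL; }

        tsize_t off = 0;
        for (tstrip_t s = 0; s < nstrips; s++) {
            tsize_t got = TIFFReadEncodedStrip(tif, s, buf + off, (tsize_t)-1);
            if (got < 0) { free(buf); TIFFClose(tif); return NULL; }
            off += got;
        }

        TIFFClose(tif);
        return buf;   /* caller frees */
    }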

Luca


boxerab

Re: Trying to understand DCP encoding workflow
« Reply #4 on: January 28, 2015, 10:48:21 AM »
Thanks, Luca.  I think you're right - I don't think libtiff is performing sequential reads, so I am not getting the max read speed.

Oh well, I still should get 20 FPS encoding, which is pretty respectable.

Aaron

Terrence Meiczinger

Re: Trying to understand DCP encoding workflow
« Reply #5 on: January 28, 2015, 05:55:34 PM »
If you want to support different input formats, then you kinda need to normalize the process. Ideally, you'd want to read a chunk, process the chunk, write the chunk. The chunk sizes could then be optimized for cache sizes and so forth. Also, while you are processing one chunk, the next one is being moved into cache. Unfortunately, you can't do that for all image types, and each image type has its own chunks (strips, blocks, etc.). So, to make it manageable, you kinda need to read an entire image into memory at once, process it, and then write to disk. Then you also gotta figure in portability. All of that probably means I/O is going to be a bottleneck, especially when reading and writing to the same drive.
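
A bare-bones version of that chunked loop, just to show the shape of it (process_chunk() is a stand-in for the real work, and in practice you'd tune the chunk size and overlap the next read with processing):

    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK_SIZE (1u << 20)   /* 1 MB; in practice tuned to cache sizes */

    /* Stand-in for the real per-chunk work (color transform, wavelet, etc.). */
    static void process_chunk(unsigned char *buf, size_t len)
    {
        (void)buf; (void)len;
    }

    /* Read a chunk, process it, write it, repeat. While one chunk is being
       processed, the OS read-ahead is (hopefully) fetching the next one. */
    int stream_process(const char *in_path, const char *out_path)
    {
        FILE *in  = fopen(in_path,  "rb");
        FILE *out = fopen(out_path, "wb");
        if (!in || !out) {
            if (in)  fclose(in);
            if (out) fclose(out);
            return -1;
        }

        unsigned char *buf = malloc(CHUNK_SIZE);
        size_t n;
        while ((n = fread(buf, 1, CHUNK_SIZE, in)) > 0) {
            process_chunk(buf, n);
            if (fwrite(buf, 1, n, out) != n) break;
        }

        free(buf);
        fclose(in);
        fclose(out);
        return 0;
    }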


boxerab

Re: Trying to understand DCP encoding workflow
« Reply #6 on: January 28, 2015, 10:11:18 PM »
Thanks, Terrence.  Using the cache efficiently is a good idea, but I guess I am still limited by the read rate of the device.

Perhaps this is a case of premature optimization.

Aaron

Wolfgang Woehl

Re: Trying to understand DCP encoding workflow
« Reply #7 on: January 30, 2015, 09:07:32 AM »
Oh well, I still should get 20 FPS encoding, which is pretty respectable.

What's the product?

Wolfgang Woehl

Re: Trying to understand DCP encoding workflow
« Reply #8 on: January 30, 2015, 09:11:16 AM »
Perhaps this is a case of premature optimization.

Indeed. Assuming everyone would be using TIFF in the first place is wrong anyway. Again, you see pretty much the whole feasible range: DPX, BMP, PNG and so on, and <ducks>JPG in some regions of the world :)

boxerab

Re: Trying to understand DCP encoding workflow
« Reply #9 on: January 30, 2015, 09:40:33 AM »
Oh well, I still should get 20 FPS encoding, which is pretty respectable.

What's the product?

I am working on a new JPEG 2000 encoder.  I expect to reach 2K frame rates of at least 100 FPS using a recent video card such as the Radeon 290X
or the upcoming Radeon 380X.  So, disk reads will limit the speed.

boxerab

Re: Trying to understand DCP encoding workflow
« Reply #10 on: January 30, 2015, 02:15:07 PM »
Regarding image formats, it is quite easy to batch-convert other formats to TIFF using GraphicsMagick, for example.
So I think it is safe to limit the supported input formats to, say, TIFF and DPX.
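
Something along these lines should do it (untested, from memory; adjust the wildcard to whatever the source format is):

    gm mogrify -format tiff *.dpx
    gm convert input.dpx output.tiff

The first writes a .tiff next to each source file; the second converts one file at a time.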

Carsten

Re: Trying to understand DCP encoding workflow
« Reply #11 on: May 06, 2015, 06:50:55 PM »
Hi Aaron,


good to see someone is working on GPU-accelerated JPEG 2000. Can you make the encoded JPEG 2000 stream DCI compliant? OpenJPEG offers special modes to make sure the generated codestream follows the DCI/SMPTE constraints.
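
If I remember correctly, the command-line tool exposes this as a cinema profile switch, roughly like this (the frame name is just an example; please double-check the exact option against your OpenJPEG version):

    opj_compress -i frame_000001.tif -o frame_000001.j2c -cinema2K 24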

Are you working on your own DCP authoring software? Because I think it would be more efficient to work together with the programmers of the existing open source software for DCP creation. It should be no problem to hook your accelerated encoder in and replace the slow OpenJPEG CPU version. Their software would then handle all the input/output and supply a uniform, uncompressed raw image format to your encoder, as fast as it can. I can tell you, if your encoder is as fast as you say, you will have a hard time reading single images from disk. In my experience, it is usually faster to supply compressed footage such as ProRes or DNxHD: a decent machine will usually supply decoded frames from compressed files faster than it can load single-image sequences. Of course, single-image sequences must be supported if needed, but they will usually carry some performance penalty. I have seen optimized CPU-based encoders reach 25-40 fps at 2K on modest hardware (Kakadu Speedpack), but only from compressed footage. If you invest 1000€+ in an SSD-based RAID, you may be able to achieve the same from image sequences.

Bear in mind that SSDs can be quite fast, but a typical feature can easily amount to 1 TB worth of single-image sequences. Double or quadruple that for 10-16 bit images, 4K, or higher-frame-rate footage. You would need at least 1 TB of SSD in your system just as intermediate storage to feed your fast encoder. Also, copying that much lossless data will take ages, even to/from an SSD. And all that slow copying just to enable your very fast encoder to read the data quickly enough... you get the idea.

As I said, work with the programmers/contributors of the existing software packages and use their existing I/O framework. I think they are already waiting for your GPU-accelerated encoder.

- Carsten
« Last Edit: May 06, 2015, 07:02:00 PM by Carsten »
