
Thread: The complexities of streaming HD video downlinks...

  1. #1
    Forum Member
    Join Date
    Aug 2011
    Location
    Brazil
    Posts
    685

    Lightbulb The complexities of streaming HD video downlinks...

    There are quite a few complexities to consider for a streaming HD video downlink. I looked into this a lot more and posted a video comparing analog images side by side with digital, the differences in resolution, etc. Note that my HD cam needs its back focus tweaked, so it's not as razor sharp as HD should be. Either that, or it's broken and must be returned.



    The analog video was recorded by the Lawmate DVR EVO-500 at 1280x960 (its maximum resolution). The camera used was the TBS69. Excellent quality. The actual resolution amounts to 640x480 in the goggles. The digital video was recorded from a very low latency stream from my 720p HD camera, limited to 8000 kb/s. Overall screen-to-screen latency there is comparable to the live GoPro output at this time, which is quite good when you think about it.

    The Fatsharks can't reproduce the resolution of 720p, so there's no point going to all this effort to produce HD streams, only to rescale them later to match the Fatsharks' 640x480 display. Certainly not given the complexity you need to deal with in the end.

    In terms of DPI and comfort though, there's still something useful to gain for us humans... The Fatsharks produce an image of around 40 dpi when projected on a 40" screen, where 72-96 dpi is the general target. The HMZ-T1's OLEDs in comparison are much more precise and probably produce around 80 dpi on the same setup. For example, with the HMZ-T1 you can read the upper status bar of a Mac or Ubuntu desktop quite easily and in very sharp detail when the graphics card resolution is set to 720p or even 1080p (HDMI). Where the regular OSD on analog easily takes 1/15th-1/20th of the screen height to produce a letter, HD could reduce this effective size to 1/40th of the screen height or even less. That means less OSD clutter and better images in one go.

    A wifi network is the easiest and only affordable (both in terms of money and weight) method to transfer the video data, but not all APs let you configure what's needed for proper transfer. What you need is configuration that maximizes throughput on one end (video) while also reducing overall latency. Most routers assume they will be used in a home setting with many clients. In this setup there is one client, and a lot of attention needs to be given to that one.

    Another issue is the changing distance between station and access point. The sending router requires an ack from the other endpoint for every packet sent, but as the distance grows, it takes longer for the packet to reach the endpoint, plus longer for the ack to be received. So the router's ack timeout *must* change, otherwise even if power is sufficient and you fly in an RF-free zone, you're going to retransmit every bit twice. Typical home routers hardcode this timeout to a value corresponding to around 50-300 meters. So even if you could outfit a home router to do this, you're not likely to fly beyond 500 m with it. What's needed is a router that dynamically adjusts this timeout based on a clever algorithm. The upper useful limit for these timeouts corresponds to around 50 km, by the way. Given the above issues, expect a practical flight distance of around 25 km.
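    To make the distance/timeout relation concrete, here's a back-of-the-envelope sketch (my own illustration, not any router's actual firmware; the processing allowance is an assumption). The speed-of-light round trip sets a hard floor on the ack timeout:

    ```python
    C = 299_792_458  # speed of light, m/s

    def min_ack_timeout_us(distance_m, processing_us=10.0):
        """Smallest safe ack timeout (microseconds) for a given link distance.
        processing_us is an assumed allowance for radio turnaround time."""
        round_trip_us = 2 * distance_m / C * 1e6  # frame out + ack back
        return round_trip_us + processing_us
    ```

    At 300 m the flight time alone is about 2 us; at 25 km it's about 167 us, which is why a timeout hardcoded for home distances retransmits everything at range.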

    Received video packets, when played *immediately* after receipt, will only play back correctly in the most ideal of environments. Any temporary interference that causes a retransmission on the link layer makes the running stream produce artifacts and lose frames. On the receiver's end, this means you're looking at a datastream with occasional jitter, and this jitter increases with noise and distance. If you hand off packets in the order they came in and send them off for processing, you're processing them in the wrong order. The only resolution is to drop the frame entirely and process the next. This results in the quite well-known MPEG artifacts: some frames are only deltas and make an area fuzzy/green, while others are keyframes and blank out your video in one go. Video soup, here we go!

    The only way to resolve this is to introduce a very short "jitter buffer". This allows some late packets to skip ahead in the queue 'just in time', where the maximum allowable wait is the configured latency. 5 ms already resolves most of the issues, but if the environment is RF-noisy, expect to increase this delay to keep the entire video from disappearing. The more unpredictable the environment you want to handle, the larger this buffer gets.
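    As an illustration of the idea (my own sketch, not the actual streamer code), a tiny jitter buffer can be a heap keyed by sequence number, holding each packet for at most the configured delay so a late packet can slot back into order:

    ```python
    import heapq

    class JitterBuffer:
        """Toy jitter buffer: hold packets for up to `delay_s` so late
        arrivals can be re-ordered. A real implementation would also
        give up on packets that never arrive instead of waiting on them."""
        def __init__(self, delay_s=0.005):  # 5 ms, as discussed above
            self.delay_s = delay_s
            self._heap = []                 # (seq, release_time, payload)

        def push(self, seq, payload, now):
            heapq.heappush(self._heap, (seq, now + self.delay_s, payload))

        def pop_ready(self, now):
            """Release, in sequence order, every packet whose hold expired."""
            out = []
            while self._heap and self._heap[0][1] <= now:
                out.append(heapq.heappop(self._heap)[2])
            return out
    ```

    A packet that arrives up to 5 ms late still comes out in the right order; anything later than that has to be dropped by the caller, artifacts and all.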

    Latency-wise, I'm working with a camera that produces a 42 fps video stream. The best predictor of latency is the fps produced by the camera. Higher fps pushes out frames earlier, so latency decreases. This comes at the expense of bandwidth, however, because more keyframes per second produce more data. Although there is less movement between frames, what counts is the total movement between keyframes, so data does increase roughly linearly with fps. When a bitrate limit is set, however, the thing that changes happens to be the fps, not the video quality. Video quality can therefore be assumed constant, but latency should not be considered a hard constant.
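    A quick back-of-the-envelope version of the fps/latency relation (my own illustration): the camera's frame interval is a hard floor on latency, because a frame can't leave the encoder before the sensor has finished capturing it.

    ```python
    def frame_interval_ms(fps):
        """Per-frame latency floor set by the camera's frame rate."""
        return 1000.0 / fps
    ```

    At 42 fps that floor is about 23.8 ms before encoding, transmission and display have added anything at all.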

    This means a choice has to be made between desired quality, latency (fps) and available bitrate. Oh yeah... the actual environment you fly in will affect the apparent bitrate you have available, so it probably heavily affects your video quality in that particular area. This is also because 2.4 GHz wireless systems in the same band are required to play nice with one another. So lots of wifi in one area means reducing the duty cycle of your link, thereby reducing the available bitrate. Fortunately, most camera implementations allow you to reconfigure quality at runtime without a restart or reinitiation of the stream, although it may take a couple of frames before it's in effect (cue digital soup). Cropping images and reducing apparent quality seems to be the only way to go.

    The rate you must configure is about twice the expected bitrate of the camera. The reason is to keep the latency low. A saturated network link will typically have high latency, even though it does seem to push all the packets through. This is easily verifiable at home by opening a browser while a large download is active: the browser seems to stall heavily. A budget of about two to three times the expected bitrate is needed to keep latencies acceptable and produce an agile connection.
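    The reason the 2-3x headroom works can be sketched with standard queueing intuition (my own illustration, not a measurement): as a link's utilization approaches 100%, queueing delay blows up even though throughput looks fine.

    ```python
    def relative_queueing_delay(offered_kbps, link_kbps):
        """M/M/1-style intuition: delay scales with 1/(1 - utilization),
        so it explodes as the link saturates. Illustrative only."""
        rho = offered_kbps / link_kbps
        return float("inf") if rho >= 1 else 1.0 / (1.0 - rho)
    ```

    At 50% utilization (the 2x budget) relative delay is a modest 2; at ~94% utilization it is already 16x, which is the stalling browser from the download example.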

    MCS-4 (39 Mbit/s, about 4x the budget) is my target number if you're streaming data back up that needs to arrive with predictable latency. The predictability varies between 20-40 ms (which admittedly isn't very good). Higher rates typically use 64-QAM modulation, which only works when you can expect little inter-symbol interference. So... once again... the lower the bitrate you can sustain, the more stable your link will be, because interference is both less likely and less damaging. When the environment gets more RF-noisy though, any high MCS rate you choose will very quickly have to scale down, reducing the budget available for whatever you want or need to transmit, plus causing stalls during link renegotiation.
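    For reference, the single-stream 802.11n rates being discussed (20 MHz channel, long guard interval), plus the headroom rule from above, as a small lookup (the `video_budget_mbps` helper is my own framing):

    ```python
    # 802.11n single-stream PHY rates, 20 MHz channel, long guard interval.
    MCS_RATES_MBPS = {0: 6.5, 1: 13.0, 2: 19.5, 3: 26.0,
                      4: 39.0, 5: 52.0, 6: 58.5, 7: 65.0}
    MCS_MODULATION = {0: "BPSK", 1: "QPSK", 2: "QPSK", 3: "16-QAM",
                      4: "16-QAM", 5: "64-QAM", 6: "64-QAM", 7: "64-QAM"}

    def video_budget_mbps(mcs, headroom=2.0):
        """Usable video bitrate after reserving the 2-3x latency headroom."""
        return MCS_RATES_MBPS[mcs] / headroom
    ```

    This is also why the later posts land on MCS-2: 19.5 Mb/s halved leaves roughly 9.75 Mb/s for video, while staying on QPSK rather than the interference-sensitive 64-QAM rates.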

    I did some ground tests a few days ago, basically walking my plane around, and passing trees and cars 100 m away already caused the jitter I mentioned on a direct stream without the 5 ms buffer. Passing a metal building caused the entire stream to go dead! I couldn't verify with analog to check how that would have turned out. And yes... that was 500 mW of power at 350 meters.

    So... HD sounds like a great thing, but is fraught with very complex issues. In terms of predictability there's not much you can count on, but if the area you fly in is noise-free and beautiful, there's very good potential for a memorable flight. What you *must* definitely have is RTH, an autopilot and probably your own software to dynamically reconfigure things when needed. It also certainly demonstrates that this isn't for everyone, as you have to adjust your flying style to cope with temporary setbacks. The technology really leaves you dead in the water at times, whereas our "brain functionality" lets us cope in the face of analog static and gradual changes.
    Last edited by Coyote; 7th April 2012 at 09:04 PM.

  2. #2
    Forum Member
    Join Date
    Nov 2011
    Location
    USA
    Posts
    6,044
    The solution to HD will be a high-bandwidth algorithm similar to what Dragonlink and other UHF systems use, basically FASST. I am working on this now with an engineer at Northrop Grumman.

  3. #3
    Forum Member
    Join Date
    Aug 2011
    Location
    Brazil
    Posts
    685
    Quote Originally Posted by chatch15117 View Post
    The solution to HD will be a high-bandwidth algorithm similar to what Dragonlink and other UHF systems use, basically FASST. I am working on this now with an engineer at Northrop Grumman.
    COFDM? UWB? DVB-T? Whatever it is... how do I get one? The weight should also be around 100 g or so. More than that becomes too heavy for my zii.

  4. #4
    Forum Member
    Join Date
    Aug 2011
    Location
    Brazil
    Posts
    685
    Ok... so I learned a bit more about all the different settings and what works and what doesn't... Found an interesting thread here: http://www.rcgroups.com/forums/showp...4&postcount=76
    seems to be some COFDM module with very nice performance: http://www.rcgroups.com/forums/showp...&postcount=312 . I'm actually going to test one in a week's time, so looking forward to that!

    These COFDM modules are still too large, too heavy and too hefty in price. Right now, I'm using "standard, consumer-level" (grin) wifi routers that do fit on planes (80 g or so). Here's a video from today, Easter bunny *not* included:

    http://www.youtube.com/watch?v=abdgf...ature=youtu.be

    That's what you should expect for a very low latency UDP multicast (!) link. As you can see... a load of crap! Video here was shot with 65 Mb/s, 500 mW of power on both sides, a 3 km ACK timeout setting, a directional antenna, SPW, unlimited bitrate on the cam, packet aggregation and 640x480 resolution... and still no steady picture!? I made an HD one earlier in the day and both came out pretty similar in terms of range and quality... interesting, right? The IMU was also connected and you can see it complain about not receiving control signals within 40 ms (FAILSAFE).

    So what's the main issue in video over wifi? -> unpredictable latency!

    Power and bandwidth are completely secondary concerns here. In full LOS I managed to get some useful video bits, but as you can see it was pretty shaky and unstable even then. Most of the time when an HD cam is used for video, the camera is put in one spot and bitrate spikes don't really occur. This means the wifi module has time to adjust to the general bitrate the camera attempts to send. When large changes in the image occur, expect the wifi to stutter and not send out anything. A static video of a static place has a low, steady bitrate. An HD cam on a plane goes anywhere from 2-12 Mb/s in under a second. One should also start asking questions about the camera's ability to deal with that (well, I verified mine and it seems ok). So wifi handles steady bitrates quite well, but apparently has lots of problems adjusting to large changes in bitrate.

    The grey bitsoup you sometimes see in the images is the result of bandwidth spikes, where the camera drops frames in order not to exceed the max bitrate. When the camera config is changed to model the correct bitrate for that quality and fps, it all works fine. In the end I reconfigured everything to 10 mW to revisit the latency issues.

    I now configured my system totally differently. Instead of going for 65 Mb/s, I chose the most restricted modulation type, MCS-2 (QPSK), and hardcoded that in: http://linuxwireless.org/en/develope...e80211/802.11n
    This gives me 19.5 Mb/s whatever the RF environment. Halved to leave some budget so latency doesn't increase toward infinity, that's around 9500 kb/s for video. The objective of MCS-2 is to reduce the impact of heavy interference and prevent retransmissions. 20/40 MHz channel bonding didn't give me anything useful, so I left that out (basically this means more bandwidth is not the issue; arriving in time is). The ACK timeout is hardcoded to 25 km (so no adaptivity there either). If I decrease this, I can improve the resilience a bit. Increasing it is going to be tricky, so I'd take 25 km as a very good max flight limit for wifi for now. The most significant change occurred when I raised the frequency by a MHz or so above the official band. The huge stutter, instability, blackouts and bitsoup were all gone, and it clearly became largely an issue of configuring the parameters to get the right performance. Interestingly, with the incorrect parameters 640x480 and HD performed equally well, so it really is about getting all packets for a frame or nothing at all. Tomorrow I may just cycle around once again with these other settings, just to see if that helps a lot.

    Afterwards, if that doesn't work out yet, I still have some other tweaks up my sleeve:
    - H.264 over TCP with SDP (probably requires an 80 ms TCP buffer to allow for retransmissions).
    - Constant bitrate... if wifi adjusts well to steady bitrates, then removing the really heavy spikes from the connection should buy me some more predictability there, at the cost of heavy quality degradation in animated sequences.
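    Sizing that TCP option is simple arithmetic (my own sketch; the 80 ms figure is the estimate from the bullet above): the buffer has to hold enough of the stream to ride out a retransmission round.

    ```python
    def retransmit_buffer_bytes(bitrate_kbps, buffer_ms):
        """Bytes needed to buffer `buffer_ms` of stream at `bitrate_kbps`
        so a TCP retransmission can complete without stalling playback."""
        bytes_per_s = bitrate_kbps * 1000 // 8
        return bytes_per_s * buffer_ms // 1000
    ```

    For the 8000 kb/s stream above, 80 ms of buffer works out to about 80 kB, small enough to be cheap but a fixed addition to the end-to-end latency.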
    Last edited by radialmind; 8th April 2012 at 05:04 PM.

  5. #5
    Forum Member
    Join Date
    Nov 2011
    Location
    USA
    Posts
    6,044
    Latency is too high, and we need a compression algorithm that will bring the bandwidth down to 1-2 Mbit.

    http://0x000000.info/%5Cdropbox%5Cmisc%5C40ltxc0.pdf
    read that

  6. #6
    Forum Member
    Join Date
    Aug 2011
    Location
    Brazil
    Posts
    685
    Well, I read it... I don't see how it's relevant to a connection over many kilometers that requires a solid modulation scheme and FEC to deal with interference.


    I worked on and read up a bit about the compression configurations for video today. I wasn't happy with the spikiness and wanted to find out what a regular, correct setting for compression should be. If you're storing footage on an onboard card, you're probably going for a very high lossless setting (where you do need a largish buffer, which explains the 10 s that go missing when GoPros die). On my camera this parameter is called "QR", but it seems to actually be the well-known "crf" parameter for H.264 video. I also noticed I had this set to 18, where a setting of 23 is perfectly acceptable and provides the kind of quality seen in videos procured through a certain means... (usually 700 MB / 1.2 GB in size). This means you can see compression when you look for it, but it's not in your face all the time. The compression comes out especially in dark areas and very slight gradients.

    Constant bitrate gave me bad results fps-wise, with lots of changes in fps throughout the entire session. However, a variable bitrate of 8000 kb/s (kilobit) plus the better quality setting of 23 now generates a bitstream that's rather constant and has very few negative side effects. One that remains: bitstreams up to 9000 kb/s produce heavily lagged images when the motion changes heavily, up to 500 ms. The 8000 kb/s setting didn't have these extreme side effects, although at 100% motion (swaying the camera 90 degrees left to right towards totally different scenes) the general delay did increase by some 100 ms or so.

    I saw a post from old man mike on a different forum, where he analyzes the latency one should expect from 720p on the AR.Drone boards (result: 200 ms). They apparently have 720p live streams too, although I'd think the onboard wireless doesn't allow one to go out too far: http://www.ardrone-flyers.com/forum/...pic.php?t=2727 . They use an onboard ARM processor afaik, and it sounds as if they must be using gstreamer or something similar to generate the video, probably ending up at similar bitstream rates.

    My measurements are similar. The stopwatch timer I was using doesn't have a high update frequency, though, the screen has its own vertical refresh rate, and the camera has a variable fps due to the bitrate limitation. This skews readings quite heavily from one reading to the next, and I get latencies between 110 ms and 210 ms. Let's take the average and call it 150 ms of latency, a very good balance between image quality, image resolution, stream stability and range. I briefly tried improving the latency with a lower resolution, but didn't see a noticeable difference. Some time later I may try that in more depth.

    So far I couldn't test this thing outside again due to rain. Looking forward to seeing how the video holds up in the same situation and whether I can make it around the block this time while maintaining video.

    Changes made since last run:
    - reduced the video quality setting to 23
    - reduced the bitrate from 12000 to 8000 kb/s
    - lowered the wifi modulation/FEC from MCS-4 to MCS-2 (QPSK@3/4), yielding 19.5 Mb/s, with latency increasing when the bandwidth saturates
    - removed the jitter buffer in the streamer pipeline on the decoder side (no longer necessary; it now renders directly)

  7. #7
    Forum Member
    Join Date
    Mar 2011
    Posts
    1,076
    tl;dr-all-of-it but:
    FHSS is inefficient for large data rates (> 1 Mbit).
    720p high profile is excellent at 4 Mbit.
    The additional bandwidth requirement comes from lower profiles (baseline/main) and lower quality encoders (which compensate with bandwidth).
    In all cases there are several techniques to lower the latency, all of which are demo'd in x264 (which hasn't been ported to ARM in a DSP-accelerated fashion, which explains that).

    For example, here's a demo of what I use (it's 720p30, encoding/transmission via the camera, decoding by a laptop via ffdec_h264 from gstreamer).
    This is in "high latency" decoding mode and "low latency" encoding mode; end to end it seems to be about 100-200 ms. It's about 100 ms in low latency decode, but I haven't implemented that in a satisfactory way for these yet (they're new cameras/chips).

    Bandwidth peaks at ~2.7 Mbit during movement.
    Transmission is done via custom drivers on top of off-the-shelf wifi chips (aka there's no TCP/IP stack involved).

    Last edited by ZobZibZab; 9th April 2012 at 06:41 PM.

  8. #8
    Forum Member
    Join Date
    Mar 2011
    Posts
    1,076
    Here's also the direct camera output.
    It has auto focus, auto exposure, and automatic color correction.
    Of course, it also supports fixed focus, fixed exposure, etc. The exposure correction takes a little longer than I'd like, but it's a rather extreme switch here.

    During very fast movement (I'm shaking the cam as fast as I can) the rolling shutter makes it wobble a bit (like all other CMOS cams), but it's not much worse than my GP2.
    Quality-wise you'll see it pixelize a bit when movement is really fast. That's because the bandwidth is limited to <3 Mbit. As usual, the YT re-encode makes it look a little worse than it is.



  9. #9
    Forum Member
    Join Date
    Sep 2011
    Location
    Seattle, WA USA
    Posts
    3,048
    ZZZ:

    Great work, I've been following your posts.

    A couple of questions: as I recall, you are running a hacked router/device driver to make this work. Basically no TCP/IP, just straight communication and decoding of a raw digital stream, sort of like serial I guess. Is that still the case?

    Since you are sitting on wifi, I presume you are just using a single channel. How many parallel streams on different channels do you think this could support? Would every other channel work, or would you need every 3rd or 4th?

    Since you are sitting on open protocols and code developed for streaming video with no limits (FPV-wise) on latency, do you think a purpose-built camera plus a purpose-built protocol could do much better in terms of latency? I have to think that some optimizations in the camera and the protocol could make your life easier.

    Great work!

  10. #10
    Forum Member
    Join Date
    Mar 2011
    Posts
    1,076
    Yes, that's still the case.
    I'm using a single wifi channel, but due to the way wifi works it's using 20 MHz of spectrum and OFDM, which in practice is not all that different from using multiple channels.
    You could stream on every second channel if you wanted to, though, and with 802.11n chips you've got quite a few.

    Purpose-built stuff is better, but it's very difficult to find proper cameras with specs for "free" (or less than 5,000-10,000 USD) (<= talking about the sensor alone here, not "COFDM premade cameras").

    That's why I'm using the above camera; it's not perfect but it's < 100 USD.
    Last edited by ZobZibZab; 9th April 2012 at 07:20 PM.
