How MIT and Caltech's Coding Breakthrough Could Accelerate Mobile Network Speeds

colinneagle (2544914) writes "What if you could transmit data without link layer flow control bogging down throughput with retransmission requests, and also optimize the size of the transmission for network efficiency and application latency constraints? In a Network World post, blogger Steve Patterson breaks down a recent breakthrough in stateless transmission using Random Linear Network Coding, or RLNC, which led to a joint venture between researchers at MIT, Caltech, and the University of Aalborg in Denmark called Code On Technologies.

The RLNC-encoded transmission improved video quality because packet loss in the RLNC case did not require the retransmission of lost packets. The RLNC-encoded video downloaded five times faster than the native video stream, and it streamed fast enough to be rendered without interruption.

In over-simplified terms, each RLNC encoded packet sent is encoded using the immediately earlier sequenced packet and randomly generated coefficients, using a linear algebra function. The combined packet length is no longer than either of the two packets from which it is composed. When a packet is lost, the missing packet can be mathematically derived from a later-sequenced packet that includes earlier-sequenced packets and the coefficients used to encode the packet."
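The mechanics the summary is gesturing at are easier to see in code. Below is a toy sketch of RLNC over the binary field (an illustration of the general technique, not Code On's implementation; all names and parameters are invented): each coded packet is the XOR of a random subset of a "generation" of source packets, the random 0/1 coefficients ride in a header, and the receiver recovers the generation by Gaussian elimination once it holds enough linearly independent packets, no matter which specific packets were lost.

```python
import random

# Toy binary RLNC (field size 2). Real systems tune the field and
# generation size -- see the paper abstract quoted in the thread below.

def encode(sources):
    """One coded packet: random 0/1 coefficients + XOR of the chosen sources."""
    n = len(sources)
    coeffs = [random.randint(0, 1) for _ in range(n)]
    if not any(coeffs):
        coeffs[random.randrange(n)] = 1      # an all-zero packet carries no information
    payload = bytes(len(sources[0]))
    for c, pkt in zip(coeffs, sources):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, pkt))
    return coeffs, payload

def decode(coded, n):
    """Gaussian elimination over GF(2) on the augmented matrix [coeffs | payload]."""
    rows = [list(c) + list(p) for c, p in coded]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            raise ValueError("need more independent packets")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [bytes(row[n:]) for row in rows[:n]]

sources = [bytes([17 * i] * 8) for i in range(1, 5)]   # a generation of 4 packets
received = []
while True:                                            # loss and reordering don't matter:
    received.append(encode(sources))                   # any 4 independent packets decode
    try:
        recovered = decode(received, len(sources))
        break
    except ValueError:
        pass
assert recovered == sources
```

The point of the randomness is that each received packet is, with high probability, useful, so the sender never needs to learn which specific packets were dropped.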
  • by Anonymous Coward

    and what if the coefficients get lost or corrupted?

    • by Anonymous Coward

      Either the entire packet comes through or it's lost.

  • by hamster_nz ( 656572 ) on Friday May 30, 2014 @12:34AM (#47126485)

    "A better coding for data error correction and redundancy than Reed-Solomon" - this is News for Nerds after all.

    And why the "oooh - flappy birds on my phone might be faster" slant? I want a faster SAN!.

    • by Ajay Anand ( 2838487 ) on Friday May 30, 2014 @01:44AM (#47126705)
      Reed-Solomon is theoretically best. I hope this new encoding is practically better than Reed-Solomon.
      • by AK Marc ( 707885 )
        How is R-S theoretically best, when it's been beaten by almost everything else in wireless FEC?

        I saw this and was looking for the tech details. It looks like they are putting in FEC at the application layer. It's been done before. They are just hiding it better to not have to explain why 1/8th of the data is padding (or whatever ratio they are using).
        • by Anonymous Coward

          Probably it comes from a misleading name. For example, Golay codes are perfect codes. Despite being perfect, there are better choices.

          • by AK Marc ( 707885 )
            And it may be different applications. R-S may be "perfect" for the type of coding it does, but it isn't purely FEC, so it's both perfect, and beaten by lots of codes for wireless FEC. Or it's perfect for "random" drops/noise, but noise/drops aren't random. So many others can beat it because "perfect" is a mathematical truth, but doesn't work for the real world.
        • by Zironic ( 1112127 ) on Friday May 30, 2014 @04:55AM (#47127213)

          Because R-S assumes uniformly random error distribution which is usually not the case when it comes to wireless interference.

          • You win the internet - quote TFA - "tests show it could deliver dramatic *potential* gains in many use cases" (my emph.)
          • by delt0r ( 999393 )
            That's why we use turbo codes, or at least interleavers. The problem with RS codes is that optimally decoding them requires quite a bit of hardware/CPU cycles.
    • >"A better coding for data error correction and redundancy than Reed-Solomon" - this is News for Nerds after all.

      Well, yeah. There's lots of things better than Reed-Solomon for doing video encoding in a lossy environment. I worked on such a project at UCSD in 1998 or so with John Rogers and Paul Chu that could eat pretty impressive amounts of noise and still have a usable video signal without retransmissions. It'd start getting pretty muddy looking when you really cranked up the losses, but unlike JPEG o

  • by preaction ( 1526109 ) on Friday May 30, 2014 @12:49AM (#47126525)

    Sounds like Parchive from usenet, which is a really good idea in a lossy environment now that I think about it.

    • by NoNonAlphaCharsHere ( 2201864 ) on Friday May 30, 2014 @12:57AM (#47126555)
      Yup, sounds a lot like par2, except this system works inline, in a streaming context to repair mangled blocks/packets, where par2 uses out-of-band data (the par2 files) to do the repairs after all the data is transmitted.
      • by TubeSteak ( 669689 ) on Friday May 30, 2014 @01:23AM (#47126657) Journal

        And like par2, it's going to require a healthy amount of processing from your CPU

        The trends to higher-performance multicore processors and parallel operations everywhere in the network and on mobile devices lends itself to an encoding scheme utilizing linear algebra and matrix equations that might not have been possible in the past.

        Notice they talk about multicore processors and not some hardware decoding embedded in the networking chip.
        From their published paper

        Abstract -- Random Linear Network Coding (RLNC) provides a theoretically efficient method for coding. The drawbacks associated with it are the complexity of the decoding and the overhead resulting from the encoding vector. Increasing the field size and generation size presents a fundamental trade-off between packet-based throughput and operational overhead. On the one hand, decreasing the probability of transmitting redundant packets is beneficial for throughput and, consequently, reduces transmission energy. On the other hand, the decoding complexity and amount of header overhead increase with field size and generation length, leading to higher energy consumption. Therefore, the optimal trade-off is system and topology dependent, as it depends on the cost in energy of performing coding operations versus transmitting data. We show that moderate field sizes are the correct choice when trade-offs are considered. The results show that sparse binary codes perform the best, unless the generation size is very low.

        Processing power is going to be an issue in mobile devices which have the most to gain from this innovation.

        • by mcrbids ( 148650 )

          Processing power is going to be an issue in mobile devices which have the most to gain from this innovation.

          I don't see this as a problem at all. This is the type of thing that screams for a dedicated processor, not a GP chip, and it's typical to see a 99% reduction in power consumption when you go this route.

      • by AK Marc ( 707885 )
        If you named all the files the same, then the repairs are done in-band. All that's "new" here is calling all the packets *.PAR, rather than some RAR and some PAR. You need one extra block per block lost (though you can have multiple blocks per packet and other tricks to hide the actual redundancy).
  • by bugs2squash ( 1132591 ) on Friday May 30, 2014 @12:51AM (#47126531)
    they've invented forward error correction; this could enable data communications and audio CDs.
    • by nyet ( 19118 )

      FEC generally does not seek to recover lost data, only the proper state of flipped bits.

      • by Guspaz ( 556486 )

        There's no difference between the two (what's the difference between a 1 flipped to a 0, or a 1 that was lost and so interpreted as a 0?), and FEC is frequently (typically?) used to recover lost data. Interleaving is often used such that losing an entire transmitted packet results only in losing small parts of multiple actual packets, which FEC can then compensate for.
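A minimal sketch of the interleaving idea described above (illustrative code, not any particular stack): transmit the transpose of a block of packets, so one lost transmission becomes a single erased byte position in every original packet, which per-packet FEC can then repair.

```python
# Block interleaving: send the transpose of k equal-length packets, so a
# single lost transmission costs each original packet one byte (an
# erasure per-packet FEC can fix) instead of killing one packet outright.
def interleave(packets):
    return [bytes(p[i] for p in packets) for i in range(len(packets[0]))]

def deinterleave(rows):
    return [bytes(r[j] for r in rows) for j in range(len(rows[0]))]

packets = [b"AAAA", b"BBBB", b"CCCC"]
wire = interleave(packets)            # [b"ABC", b"ABC", b"ABC", b"ABC"]
assert deinterleave(wire) == packets  # losing wire[1] erases only byte 1 of each packet
```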

        • by nyet ( 19118 )

          FEC algorithms do not treat an unknown number of lost bits as a stream of 0 bits, since they can't know how many bits are lost.

          • by Guspaz ( 556486 )

            They don't need to. The underlying layer knows how many bits are missing based on the timing.

      • FEC is commonly used in streaming over IP applications to support lost packets, see for example the "above IP" FEC used in LTE multicast (eMBMS) or DVB-H, Raptor [wikipedia.org]. In those applications the L2 transport has its own error detection scheme, so IP packets will typically be either fully lost or ok (in other words, the probability of having a corrupt packet is very low).
      • Wow, were you aiming for humour? Because it would be sad if you were serious.

        What people are saying is that they have implemented FEC at the packet level. I don't know much about wireless, but I do remember the maths lectures I attended 25 yrs ago on error correction techniques. At the time I could perform the FEC algorithm by hand; I consider myself "mathematically inclined" and have been awarded bits of paper attesting to that inclination, but to this day I still have just enough understanding of the mathemat
      • by hhawk ( 26580 )

        I would say FEC is to correct the current bit state, while this method, RLNC, is to correct a past or future packet. What I am totally suggesting is that with fast enough processing power, the protocol could assume every packet is missing and have recovered packets "ready to go" if one is missing...

    • Do all of these ultimately result in a corrected transmission? How about a transmission that degrades gracefully instead, so you don't have to re-transmit OR send redundant information that isn't even used unless packets are lost. Each packet would contain only a little low-frequency information, since most of the bits are high-frequency information that will not be missed as much. Maybe your smartphone could receive 4k video broadcasts and just discard 80% of the packets before even decoding them. I'm
    • Right. (Score:5, Insightful)

      by Animats ( 122034 ) on Friday May 30, 2014 @03:36AM (#47127007) Homepage

      This is yet another form of forward error correction. The claim is that it doesn't take any extra data to add forward error correction. That seems doubtful, since it would violate Shannon's coding theorem.

      This was tested against 3% random packet loss. If you have statistically well behaved packet loss due to noise, FEC works great. If you have bursty packet loss due to congestion, not so great.

      • by MobyDisk ( 75490 )

        The claim is that it doesn't take any extra data to add forward error correction. That seems doubtful, since it would violate Shannon's coding theorem.

        Yeah, that would be impossible. But I don't think they meant to claim that there is no extra data. When they say "The combined packet length is no longer than either of the two packets from which it is composed." I took that to mean that each packet is a fixed length. Not that they didn't add extra data to get to that packet length.

        This was tested against 3% random packet loss. If you have statistically well behaved packet loss due to noise, FEC works great. If you have bursty packet loss due to congestion, not so great.

        Yeah. It might be good for wireless networks, where even a "5 bar" wireless network connection has a fairly consistent 1% packet loss. The QUIC [connectify.me] protocol is another atte

    • What if you could transmit data without link layer flow control bogging down throughput with retransmission requests

      TFS makes it look like network coding can magically send data without any form of ACK & Retransmit. Network coding still requires feedback from a flow control protocol. You need to tweak the percentage of extra packets to send based on the number of useful / useless packets arriving, and you still need to control the overall rate of transmitted packets to avoid congestion. The goal is to make sure that every packet that arrives is useful for decoding the stream, regardless of which packets are lost. So

      • Is there a paper I could read? This looks interesting.
        • I'm thinking of Network Coding Meets TCP [google.com]. Though that doesn't give a great background. I've experimented with my own implementation, but had to shelve it due to lack of time. I'll try to quickly summarise the core idea:

          You have packets [p1, .. pn] in your stream's transmission window.

          Randomly pick coefficients [x1, ..., xn] and calculate a new packet = x1*p1 + x2*p2 + ... + xn*pn (in a Galois field, so '+' is an XOR and '*' is a pre-computed lookup table). Send the coefficients in the packet header.
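A sketch of that combination step, assuming GF(2^8) with the AES polynomial 0x11B (the field size is a tunable parameter, and these tables are one common choice, not necessarily the paper's): '+' is XOR, and '*' is the pre-computed lookup mentioned above. Decoding is then the same Gaussian elimination as in the binary sketch near the top of the thread, with this multiply in place of the AND.

```python
# GF(2^8) arithmetic via log/antilog tables (AES polynomial 0x11B,
# generator 3). Field and framing choices here are illustrative.
EXP, LOG = [0] * 510, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x ^= x << 1                 # multiply by the generator 3
    if x & 0x100:
        x ^= 0x11B              # reduce modulo the field polynomial
EXP[255:510] = EXP[0:255]       # doubled table avoids a modulo in gmul

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def combine(coeffs, packets):
    """coded = x1*p1 + x2*p2 + ... ; the coefficients ride in the packet header."""
    out = bytearray(len(packets[0]))
    for xcoef, p in zip(coeffs, packets):
        for i, byte in enumerate(p):
            out[i] ^= gmul(xcoef, byte)
    return bytes(out)

coded = combine([7, 42, 199], [b"abcd", b"efgh", b"ijkl"])
```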

    • by m00sh ( 2538182 )

      they've invented forward error correction; this could enable data communications and audio CDs.

      I think they combined FEC with TCP.

      TCP sends a packet and then waits for an ACK. However, they eliminated the ACK part and just added extra parity to the packets. So even when a packet is lost, it can be reconstructed.

      Of course there is a probability that multiple packets are lost such that the stream cannot be reconstructed. And of course there is more redundant data in the stream instead of the small ACK packets. The packet loss characteristics of the channel probably make this approach more efficient.
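A toy version of that trade (my own illustration, not the actual protocol): one XOR repair packet per group of k data packets buys recovery of any single loss in the group without a round trip.

```python
from functools import reduce

# One XOR repair packet per group of k: any single loss in the group is
# rebuilt locally, trading a round-trip retransmission for fixed overhead.
def xor_all(packets):
    return reduce(lambda acc, p: bytes(a ^ b for a, b in zip(acc, p)), packets)

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_all(group)                      # sent as a 4th packet
survivors = [group[0], group[2]]             # suppose pkt2 is lost
assert xor_all(survivors + [parity]) == group[1]
```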

      • The ACK is crucial. Even if you pipeline your packets so you do not have to wait for each ACK to return, they are still vital to measuring network capacity. Too many people trying to send too much data will eventually overwhelm the transmit buffer at some congestion point, and loss will shoot past the ability of your error correction to compensate.
    • So this is basically an efficient redundancy transmission scheme that out-performs (in spite of using extra data) retransmission.

      At a 3% artificially-induced error rate. What about real-world conditions? Wouldn't this clog a network more, not less?

  • And why do they use TCP if they are trying to avoid retransmissions due to lost/corrupt packets?

    This seems to say that it's mostly trying to avoid link-layer retransmission, not transport-layer. So somehow I need to figure out all the links my transmission is traversing and disable link-layer retransmission on all of them?

    • yeah TFA says it's link-layer flow control:

      What if you could transmit data without link layer flow control bogging down throughput with retransmission requests, and also optimize the size of the transmission for network efficiency and application latency constraints? Researchers from MIT, Caltech and the University of Aalborg claimed to have accomplished this...

      it can "piggyback" on TC-IP but you have to use their "software" on either end

      RLNC encoding can ride on top of the TCP-IP protocol, so implementatio

      • by AK Marc ( 707885 )
        I think they are lying.

        Many of the details fail under closer examination. It doesn't "use" TCP-IP. It could use a proprietary IP stack that is TCP/IP compatible. So it'll use IP addresses and port numbers like a TCP packet would, so that switches and routers in the middle wouldn't know or care what it's doing. If they put it on an IPX/SPX core it'd fail to route across the Internet. But it isn't TCP. It doesn't use a TCP compliant stack. It just looks like one to the outside world. It will not re-t
        • Comment removed (Score:4, Informative)

          by account_deleted ( 4530225 ) on Friday May 30, 2014 @11:42AM (#47129177)
          Comment removed based on user account deletion
          • by AK Marc ( 707885 )

            And just a nit for you, AK Marc - if someone says UDP is "running over TCP/IP", tell them to put down the router and step away from the rack. They just aren't qualified.

            How about "UDP is a member of the TCP/IP suite of protocols"? Though most leave off the "suite of protocols" at the end, it's implied. I know lots of people smarter than you that would say "UDP is a subset of TCP/IP" or some other wording to that effect. Even Wikipedia [wikipedia.org] agrees with me. TCP/IP is shorthand for The Internet Protocol Suite, which includes UDP. So UDP "runs over" TCP/IP

        • I think the writer is confused. This sounds like a non-routed layer 2 error-correction protocol for error prone networks, like cell phone data and Wi-Fi. The only relation this has to the Internet is that it can carry TCP/IP traffic, just like Ethernet, Frame Relay, ATM, and any number of other layer 2 protocols can carry TCP/IP.

          • by AK Marc ( 707885 )
            I think they are using UDP, but didn't want to get into specifics, so the writer screwed it up. UDP is a member of the TCP/IP suite of protocols.
            • Yeah, they probably are using UDP in some video tests, but I think that is irrelevant. If this is a competing technology to Reed-Solomon encoding, then it is almost certainly a data-link layer protocol and is agnostic to network or session protocols.

              From Wikipedia:

              Reed–Solomon codes have since found important applications from deep-space communication to consumer electronics. They are prominently used in consumer electronics such as CDs, DVDs, Blu-ray Discs, in data transmission technologies such as DSL and WiMAX, in broadcast systems such as DVB and ATSC, and in computer applications such as RAID 6 systems.

              I think the writer's mistake was in seeing everything through Internet-colored glasses. The article specifically says "link layer", not "session layer". TCP is a session layer technology. My best guess from the article is that this is not an e

              • by AK Marc ( 707885 )

                It is an error correction protocol to reduce OSI layer 2 packet (or more properly, "frame") retransmission.

                But the "frame" doesn't exist end-to-end, but is re-created for each network segment. So their statements that it runs on the endpoints and doesn't require any changes in the middle indicates it is a higher layer protocol. Some of the layer 2 already include FEC. So why have double FEC? (it's bad). It's not double FEC at the same layer, but an application-layer FEC (based on their description). You can have them on different layers because they are correcting different things. L-2 FEC is correcting bit

    • And why do they use TCP if they are trying to avoid retransmissions due to lost/corrupt packets?

      This seems to say that it's mostly trying to avoid link-layer retransmission, not transport-layer. So somehow I need to figure out all the links my transmission is traversing and disable link-layer retransmission on all of them?

      I believe the issue is that you can't sell it to the cable companies and the DSL providers that implement PPPoE in order to track your surfing, to make sure you are buying your television programming from them rather than file sharing, and so they can intentionally make things like Tor not work. Not that PMTUD works on those things unless the modem proxies the ICMP messages, which are usually blocked by the cable companies, unless you explicitly ifconfig down to 1492 yourself, or enable active probing for bl

  • by CaptainStumpy ( 1132145 ) on Friday May 30, 2014 @12:56AM (#47126549)
    Xfinity video in your face 4650% faster! Xfinity introduces the RLNC fast lane data transmission! It's like an over-caffeinated jaguar solving linear matrices while orbiting the earth in the space shuttle and doing coke. RAAAWRR! Don't like the jaguar? Tough floating jaguar shit, you don't have a choice! We own teh tubes! ©omcastic!
  • by Anonymous Coward

    In over-simplified terms, each RLNC encoded packet sent is encoded using the immediately earlier sequenced packet and randomly generated coefficients, using a linear algebra function. The combined packet length is no longer than either of the two packets from which it is composed. When a packet is lost, the missing packet can be mathematically derived from a later-sequenced packet that includes earlier-sequenced packets and the coefficients used to encode the packet.

    If that's "over-simplified" I am not sure I want to try to read the paper. That paragraph alone gave me a headache. Maybe it's the grapevine and the paper hurts less to read. Meh, tomorrow.

    • by AK Marc ( 707885 )
      "Application level FEC" Is that easier? They try to make it sound cooler, or harder, but it's been done before.
    • by skids ( 119237 )

      If that's "over-simplified" I am not sure I want to try to read the paper. That paragraph alone gave me a headache. Maybe it's the grapevine and the paper hurts less to read. Meh, tomorrow.

      If it is like most such articles, they use 35 pages of polynomial math over a GF(2) field as a way to mathematically formalize what could be better explained with a one-page circuit diagram with a few XOR gates and shift registers and one paragraph of commentary, so yeah, wait for the boiled-down version.
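For the flavor of it, here is the sort of XOR-and-shift-register primitive being alluded to; a textbook 16-bit Fibonacci LFSR (the taps are the standard maximal-length example, not taken from any RLNC paper):

```python
# 16-bit Fibonacci LFSR, taps 16,14,13,11 (a classic maximal-length
# configuration): pure XOR-and-shift, which is all that GF(2) polynomial
# division amounts to in hardware.
def lfsr16(state):
    while True:
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state

gen = lfsr16(0xACE1)
print([hex(next(gen)) for _ in range(4)])   # deterministic pseudo-random run
```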

    • by tibit ( 1762298 )

      There's no single paper. There's a lot of papers that refine the technique and apply it to different problems. To even begin to understand it, you must have a good grasp of network coding as it's actually applied in real life. The butterfly network is but a starting point.

  • Word (Score:5, Funny)

    by Dan East ( 318230 ) on Friday May 30, 2014 @01:00AM (#47126567) Journal

    "the immediately earlier sequenced packet". There a word for that. It's called "previous". As in "the previous packet".

    • by Anonymous Coward

      Head east, Dan. Head! East!

      Nothing says the previous packet has the previous sequence number. Packets may arrive in any order, or not arrive at all (the point here). The stack makes it all work at the ends. This is all a layman should need to understand, and as you have shown, all they can be expected to understand. It was so worded for a reason, and if it's lost on the layman, that's all right.

      • Nothing says the previous packet has the previous sequence number

        The word "previous" says the previous packet has the previous sequence number. I that reasonable people skilled in the art would generally assume previous meant sequentially previous, not time-of-receipt previous.

        However, what I would say is that nothing says that the "immediately earlier" packet has the previous sequence number.

        We could get pedantic and talk about sequentially previous and chronologically previous, further clarifying that chronologically previous is from the point of view of the receiver (since

        • I that reasonable people skilled in the art would generally assume previous meant sequentially previous, not time-of-receipt previous.

          Sure, just like readers are able to reconstruct the missing second word in your post. But the word "previous" has some ambiguity. The original wording is more precise and specific.

    • by AK Marc ( 707885 )
      Maybe they are trying to cover the case where they come out of order, then there's a drop?

      But I agree in principle, the whole thing was poorly worded. Like it was written by a writer who doesn't understand it, taking dictation from an engineer who was ordered not to be specific about it.
    • by c ( 8461 )

      There's a word for that. It's called "previous". As in "the previous packet".

      When you're talking about packet protocols, where things get lost, duplicated, or reordered, "previous" is an incredibly imprecise word.

  • I remember working on a project where network coding was proposed for microsatellite cluster communications. If I remember correctly, network coding requires that all the nodes in the network have complete knowledge of the state of the network at any given transmission window. This requires transmission of the network state, which used something like 7% overhead. The routing of a message from one end of the cluster to the other was difficult. I believe it might have been an NP-complete problem. Have th
  • If RLNC is five times faster than the native stream, does it mean that the entire transmission network may go on a roll?
  • How MIT and Caltech's Coding Breakthrough Could Accelerate Mobile Network Speeds

    I wish they also made some breakthrough to increase the data caps we are stuck with.

  • The random element seems to be there to avoid having to come up with an optimal distribution of information. Otherwise, it seems like a frame-level (or even finer) RAID.
    • by AK Marc ( 707885 )
      Go look at Silver Peak. They do a WAN optimization that uses variable FEC to increase FEC as packet loss increases, to minimize waste as conditions change.
  • by antifoidulus ( 807088 ) on Friday May 30, 2014 @02:03AM (#47126767) Homepage Journal
    TFA doesn't seem to state what their assumptions were on how packets get lost and how many packet losses the algorithm can deal with, and what their distribution is. There are a lot of ways you can drop k packets out of n packets sent.

    If you assume that every packet has a k/n chance of being lost, then being able to reconstruct a single missing packet could be incredibly useful. However, cell phone packet losses tend to be incredibly bursty, i.e. they will have very good throughput for a while, but then all of a sudden (maybe you changed towers or went under a bridge, etc.) lose a whole ton of packets. Can this algorithm deal with bursty losses? I wish TFA was a bit more clear on that
    • (I haven't read this paper yet, but I've read other Network Coding data and experimented with the idea myself)

      With TCP, when a packet is lost you have to retransmit it. You could simply duplicate all packets, like using RAID1 for HDD redundancy. But this obviously wastes bandwidth.

      Network coding combines packets in a more intelligent way. More like RAID5 / RAID6 with continuous adaptation based on network conditions. Any packet that arrives may allow you to deduce information that you haven't seen before.

  • The story linked to seems to have an awful explanation of what's going on. This makes a lot more sense:

    http://www.codeontechnologies.... [codeontechnologies.com]

    Reminds me a little of a random project I started back in college where I'd transmit a file in a bunch of packets where each contained the original file modulo a specified prime number. That way, if the file was split into 10,000 packets, then the transmitter could send out a constant stream of packets modulo different primes and as soon as the receiver got any 10,000 of th
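That scheme is essentially the Chinese Remainder Theorem used as a rateless code. A small sketch under illustrative values (Python's pow(x, -1, p) performs the extended-Euclid step): treat the message as one big integer, send one residue per prime, and reconstruct from any subset of packets whose primes multiply past the message.

```python
import random

# CRT as a rateless code: message -> residues mod distinct primes; any
# received packets whose primes' product exceeds the message reconstruct it.
def crt(pairs):                         # pairs: [(prime, residue), ...]
    M = 1
    for p, _ in pairs:
        M *= p
    x = 0
    for p, r in pairs:
        Mi = M // p
        x += r * Mi * pow(Mi, -1, p)    # modular inverse = extended Euclid
    return x % M

message = int.from_bytes(b"hi", "big")          # 26729 < 251*257, so any 2 packets suffice
stream = [(p, message % p) for p in (251, 257, 263, 269, 271)]
random.shuffle(stream)                          # loss and reordering don't matter
assert crt(stream[:2]) == message
```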

    • It might be possible now with IPv6.

    • That link shows their website, where they tell us that if the sender is about to send packet 83, and figures out it didn't get an acknowledgement for packet 22, it then has to resend packets 22 to 82. Which seems an entirely stupid thing to do, and an obvious improvement would be to resend only the packets that are actually lost.

      Guess what: According to the Wikipedia article about TCP, that's what the "selective acknowledgment" (SACK) option defined in RFC 2018 does. So _poof_ goes the benefit of this sc
    • "That way, if the file was split into 10,000 packets, then the transmitter could send out a constant stream of packets module different primes and as soon as the receiver got any 10,000 of them they could use the extended euclidean algorithm to reconstruct the original file."

      So if the receiver got 10K copies of the 1st packet and nothing else it could still reconstruct the file? That's impressive. Which college was this, Hogwarts?

      • by skids ( 119237 )

        So if the receiver got 10K copies of the 1st packet and nothing else it could still reconstruct the file?

        Considering each one is only sent once, that would be some fierce level of broken compound load balancing.

        Note he said each packet contains the modulus of the entire original file with a different prime. The only thing that would cause duplicate packets would be running out of acceptably-sized primes.

        • by Viol8 ( 599362 )

          You need to re-read what he wrote:

          "That way, if the file was split into 10,000 packets, then the transmitter could send out a constant stream of packets module different primes and as soon as the receiver got any 10,000 of them "

          How can you split something into 10K packets then get *any* 10K of them? He's either saying he sends more than 10K, or the receiver has to receive all 10K packets that were sent to reconstruct the file, which I doubt is what he meant.

          • Yeah, that could have been more clear. I'm suggesting sending a constant stream of packets until the target asks you to stop. No need for windowing or flow control, but it's pretty computationally expensive.

  • This means my mobile internet speed might soon be up to 10 bps instead of the 2 bps I seem to get at the moment!

    • by Anonymous Coward

      It means that your ISP can oversell their bandwidth even more.

  • Except it's called MPEG video. And it's used for TV.

    MPEG also has a mode to recover the errors, but it's expensive, and when you're streaming, who cares? If your link sucks, you don't blame the stream.

  • by Anonymous Coward

    "Code On" has done well at generating some buzz.
    Unfortunately, the only details on their website are of the type "this is awesome", with no description of how the breakthrough works.

    http://www.codeontechnologies.com/technology/white-papers/

    I just wish they would post actual details on the encoding method. How does it compare to Fountain Codes such as RFC 5053?

  • In over-simplified terms, each RLNC encoded packet sent is encoded using the immediately earlier sequenced packet and randomly generated coefficients, using a linear algebra function. The combined packet length is no longer than either of the two packets from which it is composed. When a packet is lost, the missing packet can be mathematically derived from a later-sequenced packet that includes earlier-sequenced packets and the coefficients used to encode the packet.

    Uh... could you simplify it just a little more?

    How does a "later-sequenced packet [...] include earlier-sequenced packets"?

    • They say "randomly" generated coefficients, but I'll bet you can use a psudo random number generator and pass in the same seed value to both the sender and reciever. Bam, now both sides have the same set of semi random coefficients to use when doing the fancy linear algebra.

      • by tibit ( 1762298 )

        The linear algebra isn't even fancy. They need to do a lot of linear equation solving, that's about it.

  • Now I can free up 4 of the 5 minutes it used to take burning through my monthly bandwidth to do something constructive.
  • by epine ( 68316 ) on Friday May 30, 2014 @10:10AM (#47128439)

    I've been parsing this kind of press release for a long, long time now. I can pretty much tell what we're dealing with by how hard it is to state the advantage of a new approach in narrow and precise language.

    That this blurb doesn't even disclose the error model class (error correction is undefined without doing so) suggests that the main advantage of this codec lies more in the targeting of a particular loss model than in a general advance in mathematical power.

    Any error correction method looks good when fed data that exactly corresponds to the loss model over which it directly optimises.

    The innovators of this might have something valuable, but they are clearly trying to construe it as more than it is. This suggests that there are other, equally viable ways to skin this particular cat.

  • This sounds exactly like a Fountain Code [wikipedia.org] to me, which isn't exactly news.
