


Google Funds Ogg Theora For Mobile
An anonymous reader writes "Google has decided to fund the development of Theora optimized for ARM processors. The article on the Open Source at Google blog notes the importance of having a universal baseline video codec for the Web: 'What is clear though, is that we need a baseline to work from — one standard format that (if all else fails) everything can fall back to. This doesn't need to be the most complex format, or the most advertised format, or even the format with the most companies involved in its creation. All it needs to do is to be available, everywhere. The codec in the frame for this is Ogg Theora, a spin off of the VP3 codec released into the wild by On2 a couple of years ago.'"
Dirac (Score:4, Interesting)
Re:Dirac (Score:4, Informative)
Re: (Score:2)
Only partially true. Setting aside the surrounding infrastructure that isn't present outside the PC world, Dirac the codec itself seems to be ready, "production" ready. The BBC apparently uses it for internal needs and transmission.
Re: (Score:3, Insightful)
The real world needs a low bandwidth, US IP lawyer safe, free codec.
Re: (Score:3, Informative)
Re: (Score:3, Informative)
> why is there so much more love for Theora than for Dirac?
In order to play Flash video, or Silverlight video, browsers need a plugin.
Theora/HTML5 video can play in Firefox, Opera, Google Chrome (without any plugin) and IE (this browser alone requires a plugin).
(You can download that plugin for IE from here: http://code.google.com/chrome/chromeframe/ )
Ogg Vorbis, Speex, Theora and FLAC files can play on Windows and Linux platforms.
(Linux support is out-of-the-box, and you can get the support for Windows
Re: (Score:2)
In order to play Flash video, or Silverlight video, browsers need a plugin.
That's less of a disadvantage if PC makers install the plug-in on new PCs. This appears to be the case at least for Flash Player.
Theora/HTML5 video can play in Firefox, Opera, Google Chrome (without any plugin) and IE (this browser alone requires a plugin).
Doesn't Safari need the XiphQT plug-in [xiph.org] too?
Re: (Score:2)
Actually I think that is an indication that WMP is useless. Even if it isn't, WMP still remains useless. Use VLC.
Re:Dirac (Score:5, Informative)
CPU load. Theora is based on VP3, which is old. It was open sourced in 2004, but VP3 first shipped in 2000. Back in 2000, I had a 450MHz K6-2, and a lot of people I knew had slower machines. Now, a typical handheld is faster than that machine. Theora, like VP3, relies a lot on postprocessing passes for quality. This has the advantage that you can just not bother on slower machines, and get a slightly worse picture but with a lower CPU requirement.
Dirac, in contrast, needs at least a 2GHz CPU to play back. It's patent free and looks great, but the CPU load is huge. There have been efforts to offload a lot of it onto the GPU, which is nice for the desktop but doesn't help older machines and handhelds (except the latest generation). The BBC is working with vendors to get Dirac implemented in hardware, but it won't be ubiquitous for quite a few years.
Dirac also doesn't perform as well as Theora at low bitrates. This is very important for web streaming. Dirac is great for situations where bandwidth and CPU power are plentiful, but Theora makes more sense as a lowest common denominator solution. Ideally, you'd see both supported; Dirac for high quality, Theora for fallback.
Re: (Score:2)
Re: (Score:2)
Because the version of Dirac that gives significant advantage is nowhere near usable. I understand it's progressing nicely, though.
Re: (Score:2, Interesting)
There are rumors that a number of the major companies, except for Adobe, are moving towards an agreement on Dirac in Ogg containers as the new standard, at least for higher resolution/bandwidth content. From what I understand, Google is also in favor of Dirac but wants Theora as the fallback codec for mobile devices/phones with less bandwidth and CPU resources -- at least for now. Both are great codecs and it looks like in the long run Dirac may become the standard codec for HD content.
Beyond awesome! (Score:5, Insightful)
This is beyond awesome, it's a game-changer. Google is one of those rare companies that singularly has the power to move markets, and it is revolutionary to see it do so in favor of consumers as it has. I understand the reasons why it has preferred H.264 over Theora, but it is really nice to see that it also understands the reasons why we should be preferring an open format instead. It's especially nice that, in an age of companies wanting to lock everything down and be the gatekeeper to everything, the major player in technology is pushing yet again to open things up.
Sometimes I think that Google is about the only company that "gets it." They understand that more people using the Internet translates to more money in their pocket. Even if those people are not using Google's services directly, they are increasing the market such that collectively, it has more opportunity, which in turn translates into more $$$. They seem to not really care if other people are making more money as well, which really separates them in my mind from other companies, who are of the "it is not enough that I succeed, but everyone else must fail" mentality.
Anyway, back to the topic at hand, one reason I've seen people regurgitate in why H.264 is the right way to go is because it is supported on hardware. Congratulations to Google on working to negate that argument.
Re: (Score:2)
They can afford to. With great power comes great responsibility.
As a footnote this is why, although I abhor Apple's control fetish and find their latest coding restrictions for their products utterly insane, I applaud Jobs saying F U to Adobe: thanks to Apple, there is a web which works without Flash, thank the gods..... In one case, Google is preventing H.264 from becoming so dominant that the web becomes unusable without H.264, by embracing a less popular codec
H.264 uses half the bitrate of Theora (Score:3, Insightful)
Sometimes I think that Google just didn't "get it" in the first place when choosing H.264 for youtube.
YouTube started out on Sorenson H.263 because Flash Player supported that out of the box. When iPhone and new versions of Flash Player started to support H.264, YouTube reencoded uploaded videos in the new format. It was a happy accident that Chrome and Safari supported the same codec for the HTML5 <video> element. Now that platforms stuck on Flash 7 (namely Wii) have upgraded to a version with H.264, YouTube appears not to do H.263 anymore. Theora is somewhere between H.263 and H.264 in quality, roug
Re: (Score:3, Informative)
Re: (Score:2)
it was likely because of hardware support (Score:2)
Yes, the iPhone and iPod Touch mattered. But if Google had chosen Theora and not H.264 (not sure why it's an either/or, but you presupposed this) then YouTube would be a bit player in the mobile market right now because no mobile device could play it efficiently, because there is no Theora support in mobile chips right now.
YouTube's competitors were already supporting H.264 and thus they could work on mobile devices, and Google could have lost the mobile market space to them if they didn't move to cover thi
Re: (Score:2, Interesting)
H.264 is an open standard, so the fox is not crying.
Re: (Score:2)
Re: (Score:3, Informative)
It's an open standard. This is well known.
"The ITU-T H.264 standard and the ISO/IEC MPEG-4 AVC standard (formally, ISO/IEC 14496-10 - MPEG-4 Part 10, Advanced Video Coding) are jointly maintained so that they have identical technical content."
Just because it is patented doesn't mean it's not open.
It is the opposite side of the coin from something like WMV, which is proprietary.
Re: (Score:2)
VC-1 has been open for several years now.
Re: (Score:2)
I disagree. You are at the mercy of a group of people who control it. It's heavily patent-encumbered. Thus, it is not really open.
Re: (Score:2)
Yes, yes it is.
Just because it's proprietary code doesn't mean you can pick and choose the things the "tech community" calls open.
It's not confined just to open-source and completely transparent, royalty-free projects and standards.
It's a well defined and understood word, as the opposite to "closed" standards that require reverse engineering or an NDA to work with.
Re: (Score:3, Insightful)
It hardly matters if the specs are published, if you can't implement them without paying for patent licenses.
Re: (Score:2)
That depends. It matters a great deal I think. It may be worth paying for the licence, and you have the option if you want it, unlike a closed format where your only option is reverse engineering. For systems like GSM, fully patented but open standards are in use that supply royalties to the original companies that developed them.
It's not always bad if the result is an open, but patented standard.
We can continue to push for royalty free standards and fully OSS-friendly codecs, but dismissing the middle grou
Re: (Score:3, Insightful)
Re: (Score:2)
*usually* means royalty free, but not always.
From your link.
Re:OGG newbie question (Score:5, Informative)
Theora is perhaps better than H.263 and MPEG-2 (from the mid 90s), but does not come close to H.264/MPEG-4 AVC or VC-1. (The frozen Theora bitstream format is lacking many features found in H.264 and VC-1.) Results might be similar to H.263+/MPEG-4 ASP.
The Ogg container also has some documented flaws [hardwarebug.org].
Note that there are many sites which perform misleading or flawed comparisons of the two; for example, they might compare the result from YouTube's H.264 encoder with a lossy source (which optimizes for encoding speed) to a locally run Theora encode with a lossless source.
Since OS X 10.6 and Windows 7 come with H.264 decoding, and Windows 7 supports H.264 hardware decoding with compatible hardware from any source, I recommend sticking with H.264. (OS X 10.6's H.264 hardware decoding support appears to be limited to videos played in QuickTime X from MPEG4 or QuickTime container files on systems with nVidia 9400M GPUs or newer, even though Macs with capable GPUs started appearing in 2007.)
Re: (Score:3, Insightful)
Ogg may indeed be less than ideal, but that article exaggerates its problems.
Re: (Score:3, Interesting)
Ogg may indeed be less than ideal, but that article exaggerates its problems.
Which raises the question: why not use the free/open Matroska container instead? It can hold almost any media stream, including Theora, and supports multiple selectable sound and subtitle streams for a video stream. http://en.wikipedia.org/wiki/Matroska [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
$50 per post, $10 per reply. Standard rates.
Re: (Score:2)
But many features in H.264 are used in H.264 implementations today, giving it an advantage.
Re: (Score:2)
The fact that some features might be less useful does not mean they are all useless. And pretty much everyone who knows anything about video formats agrees that Theora is sorely lacking and is unable to ever catch up with H.264 as things stand right now.
Paging Chris DiBona (Score:4, Interesting)
Chris DiBona of the Google open source group claimed [whatwg.org] that "If [youtube] were to switch to theora and maintain even a semblance of the current youtube quality it would take up most available bandwidth across the Internet."
This was shown to be false [xiph.org].
Mr DiBona then mysteriously vanished without trace.
Could he please manifest and either (a) support his claims or (b) concede his error?
Thanks ever so much.
Re: (Score:2)
AFAIK, Vorbis/Theora does fine with low- and medium-quality video (up to and including 360p, I think, but I'm not sure) but has more problems with file size and bandwidth usage for high-quality videos.
So what I understand is that they promote Vorbis/Theora for "low-end" video streaming and prefer H.264 for "high-end" videos.
I'm really not sure about that, it's just the result of my tiny experimentations with converting h264 c
You just got it (Score:2)
They picked this shitty format, so be happy and shut up.
Re: (Score:2)
While I disagree wholeheartedly with Chris's statement, I also think that Greg's comparison was not a very good one. The only thing he compared was a computer-generated, low-motion, pristine and lossless source. How many of those have you seen on YouTube? Where is the noisy, poorly-lit video of some kid complaining about his life? Where's the shaky video someone shot on their cell phone? Where's the re-re-re-encoded video from people who re-uploaded the same video other people uploaded? Where's t
Re: (Score:2)
> No, it was not representative at all.
How representative of the stuff that actually gets large numbers of hits are your examples? Inefficient transmission of a noisy, poorly-lit video of some kid complaining about his life is unimportant if it only gets downloaded nine times.
The demerits of the beach (Score:2)
Inefficient transmission of a noisy, poorly-lit video of some kid complaining about his life is unimportant if it only gets downloaded nine times.
Nine? It's over nine thousand. [youtube.com]
Re:Paging Chris DiBona (Score:5, Insightful)
The Xiph group's rebuttal page does nothing to show Chris DiBona's contention was false. As I have said before, through either ignorance or malice the Xiph guys dropped the ball on their comparison.
1. Their larger Theora video has an audio track that's about 64kbps. The H.264 video from YouTube has a 128kbps audio track (the numbers are rough since they're VBR tracks). This means for every second of video the Theora file has an extra 64kbps to throw at the video. While 64kbps might not sound like much, that's 13% of the file's total bitrate. This gives the Theora track a 13% data rate advantage over YouTube's video. Every objective test I've ever seen has gauged AAC and Vorbis to have roughly equivalent audio quality at the same bitrate. If they want to make an actual comparison they would need to use a 128kbps Vorbis audio track.
2. The Ogg file format really sucks for streaming over the internet. The Ogg container tries to be too general a format when it's only being used to represent time-based media. FFmpeg developer Mans has a lot to say [hardwarebug.org] about the container format. Thanks to the sample and chunk tables in the MPEG-4 format, seeks are really efficient over the network, since the header gives you an index to all of the samples in the file. A single HTTP request or file seek is needed to seek to a particular time in the file, even if the full file hasn't been downloaded yet (a rough sketch of such a ranged request is at the end of this comment). For services like YouTube and Vimeo, especially in the context of mobile connections, Ogg's inefficiency is a real detriment.
3. MPEG-4 files with H.264/AAC tracks can be handled by the Flash plug-in as well as natively in browsers. YouTube and Vimeo and others can encode a single version of a file and serve it up to older browsers using Flash and newer browsers using the HTML5 video tag. If Ogg is added as an option, that is another step in your decision tree. For individual requests this extra logic might be trivial, but when you're handling millions of requests per hour it really adds up.
I'm not defending any hyperbole Chris DiBona was spouting off about the internet grinding to a halt, but Ogg and Theora are simply not optimal for a "baseline" media format. Its only real feature is the fact it is open source and doesn't require a license. This isn't the most useful feature in today's world because all of the mobile devices that would be served Theora files already have licenses for MPEG-4. Tens to hundreds of millions of phones already support MPEG-4. They're using MPEG-4 to send video over MMS and e-mail and for watching video on the web. Theora doesn't improve any of those experiences.
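To make the seek mechanics in point 2 concrete, here is a rough sketch (TypeScript) of the kind of ranged request a player can issue once it has looked a timestamp up in the MPEG-4 sample tables. The URL and byte offset below are made up; only the Range header behaviour is standard HTTP.

// Sketch: seek into a progressively downloaded MP4 without fetching the whole file.
// Assumes the moov/stbl index has already been parsed and a timestamp has been
// mapped to a byte offset; that lookup is not shown here.
async function seekTo(url: string, byteOffset: number): Promise<ArrayBuffer> {
  // One ranged HTTP request pulls data starting at the sample we want.
  const res = await fetch(url, { headers: { Range: `bytes=${byteOffset}-` } });
  if (res.status !== 206) {
    throw new Error(`server did not honour the range request: ${res.status}`);
  }
  return res.arrayBuffer();
}

// Hypothetical usage: 1234567 would come from the parsed sample table.
seekTo("https://example.com/video.mp4", 1234567)
  .then(buf => console.log(`got ${buf.byteLength} bytes starting at the seek point`));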
Re: (Score:2)
>The Ogg file format really sucks for streaming over the internet.
No, it is rather good for streaming. It is actually a bit weaker for downloaded or progressive downloaded (youtube-style) content, but not horrible even there.
>Its only real feature is the fact it is open source and doesn't require a license.
It may not be optimal, but it is good enough, and so the fact that it is Free is enough.
Re: (Score:2)
Its only real feature is the fact it is open source and doesn't require a license.
Yeah buddy, that's the difference which makes a difference. Theora is good enough(tm). And it's only getting better.
Re: (Score:3, Interesting)
So a single browser will support an extension to the Ogg format to give it the ability MPEG-4 has had since its inception? They're only eleven years behind MPEG-4 Part 12. Are they going to roll out edit lists sometime around 2020? The indexing only works if you go through all of your existing Ogg content and rebuild it using the new keyframe indexing. If YouTube had bet the farm on Ogg a year ago they would be currently going back through their years' worth of archive video to rebuild it to add indexes. Eve
Re: (Score:3, Interesting)
Re: (Score:3, Interesting)
"Theora is still not as high quality as later codecs."
Indeed. However, I didn't say otherwise, the xiph.org page doesn't say otherwise and that isn't what your original assertion said. You are answering in a manner difficult to distinguish from being evasive.
Could you please address the original questions, and the findings detailed on that page?
Re: (Score:2)
A few things:
1) Please stop citing that. It's not fair to compare highly tweaked Theora encodes to untweaked H.264. Put them on level ground at least.
2) That quote is clearly hyperbole.
3) For Theora to maintain equal video quality in the sub-1mbit range, it would take at least 30% higher bitrates on most videos. For some videos that H.264 compresses well, it could be 80% higher bitrates. H.264 does extremely well with fading, single-colour areas that aren't updated often (slides, captions), and preserving s
Re: (Score:2)
The comparison was typical to typical - "highly tweaked" Thusnelda is actually the reasonable way to encode Theora.
Re: (Score:2)
But highly tweaked H.264 is amazing, and there's GUI tools that make it ridiculously easy to get set up.
I'm just saying... tweak both, or neither. If you spend the same amount of time tweaking H.264, you end up with incredibly good quality.
Re: (Score:2, Interesting)
XML serialization of HTML is still there. XForms... I never heard it worked in any browser sans some 3rd party plugins.
So, please, describe what's rubbish in HTML. Those new elements are _needed_ anyway. It's better to have them than to implement them anew every time you need them.
I don't understand what your problem with audio and video is, either. They are here anyway, with Flash. You can disable Flash. You can disable audio/video if you really want to. Your problem is?
Re: (Score:3, Insightful)
Canvas is not needed. You can create dynamic, animated graphics using the existing SVG standard.
And yes, html5 brings back the integration of style and content.
It is defined to maintain backwards compatibility by keeping some elements that are counter to the philosophy of html and yet fails to preserve the definition and presence of those elements. It is even halfassed at meeting its stated goals.
Html5 spec does not specify a single DOM structure, unlike html2, this means that IE is going to continue to req
IE: The worst thing to happen to the Web. (Score:2)
Canvas is not needed. You can create dynamic, animated graphics using the existing SVG standard.
But can you let the user do this creating? How would one write a photo editor or pixel art editor with SVG and no <canvas>? And how well does SVG handle sprite graphics in the style of 8-bit or 16-bit consoles?
And yes, html5 brings back the integration of style and content.
It was still there in transitional XHTML 1.
Html5 spec does not specify a single DOM structure
What exactly do you mean by this? If the HTML5 standard cites the DOM Events spec [w3.org], then it supports addEventListener and the like. Microsoft made a specific choice not to support DOM Events, a W3C Recommendation published in 2000, and therefore not supp
Re:IE: The worst thing to happen to the Web. (Score:4, Insightful)
How does SVG handle sprite graphics? Far better than canvas does. To move a sprite, you can transform its position; with a canvas you have to re-composite the image. The sprite itself can be a traditional bit-mapped image if desired.
Pixel art editing is somewhat possible. Canvas can generate a bitmap of the output, but SVG cannot without an external converter. As you add pixels (really rectangles in the DOM of the svg) you dramatically explode the size of the DOM tree causing performance issues. With good partitioning algorithms, this can be partially mitigated by combining adjacent like pixels into single DOM objects.
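To illustrate both points, a minimal sketch (TypeScript, element ids made up): moving an SVG sprite is just a new transform attribute, while a pixel-art edit on a canvas is a direct paint call.

// SVG: the sprite is a retained node in the DOM; moving it means changing its
// transform, and the browser re-composites for you.
const sprite = document.querySelector<SVGGElement>("#sprite")!;
sprite.setAttribute("transform", "translate(120, 64)");

// Canvas: there is no retained sprite object; script paints pixels directly,
// which is what a pixel-art editor wants anyway.
const canvas = document.querySelector<HTMLCanvasElement>("#editor")!;
const ctx = canvas.getContext("2d")!;
ctx.fillStyle = "#ff0000";
ctx.fillRect(10, 10, 1, 1); // one "pixel"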
No web browser ever supported XHTML2, but then, only after the XHTML2 spec was shelved did browsers start to roll out any significant support for HTML5 either.
Re: (Score:2)
To move a sprite, you can transform its position, with a canvas you have to re-composite the image.
But if you transform a sprite's position, the web browser has to re-composite the image anyway. The advantage of <canvas> is that it lets script take PNG screenshots of a composited image and pass them around.
As you add pixels (really rectangles in the DOM of the svg) you dramatically explode the size of the DOM tree causing performance issues. With good partitioning algorithms, this can be partially mitigated by combining adjacent like pixels into single DOM objects.
Or you can just do it the easy way with a <canvas>.
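For what it's worth, the PNG-screenshot part mentioned above is essentially a one-liner (TypeScript; the "editor" id is made up):

// Export the composited canvas as a PNG data: URL that script can pass around.
const editor = document.querySelector<HTMLCanvasElement>("#editor")!;
const pngDataUrl: string = editor.toDataURL("image/png");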
Re: (Score:2)
1: XForms are a huge improvement over traditional html forms, with control over view, validation and data.
2: The standardization on xml events and DOM. This is a huge issue.
If you have ever done any serious work in AJAX web apps,
you will know that IE uses a different DOM structure than everything else.
This means that you have to (see the sketch below):
A: know all the idiosyncratic differences and how to identify and code around them.
B: test everything with ridiculous thoroughness and apply hackish patches to get things wo
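For example, something as basic as attaching an event handler ends up branching on the event model. A minimal sketch (TypeScript), assuming the classic split between legacy IE's attachEvent and the standard addEventListener; the helper name is just for illustration:

// Cross-browser event binding, the ad-hoc way AJAX code has to do it as long
// as IE sticks to its own event model instead of W3C DOM Events.
function listen(el: Element, type: string, handler: (e: Event) => void): void {
  const target = el as any; // attachEvent is not in the standard DOM typings
  if (typeof target.addEventListener === "function") {
    target.addEventListener(type, handler); // W3C DOM Events: Firefox, Opera, Safari, Chrome
  } else if (typeof target.attachEvent === "function") {
    target.attachEvent("on" + type, handler); // legacy IE
  }
}

// Usage: listen(document.querySelector("#submit")!, "click", e => { /* ... */ });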
Re: (Score:3, Insightful)
1. XForms are a huge improvement which currently does not work. Good or bad, that's how it is.
2. What? There already is a standard. Microsoft decided it does not need to do it the way it's written. Why would you think they'll implement something else?
3. XHTML5 (XML serialization of HTML5) can include MathML and SVG too. Your point is? HTML serialization will be able to do that or so I heard.
4. Predefined styles are backward compatibility. I don't like them either (aside from, maybe, b/i/etc) but I doubt browser ven
Re: (Score:2)
Examples? The copyright, error, example, issue, note, search, and warning style classes are defined to have specific semantic meaning in HTML5. The time, meter and output tags act similarly, with a few differences. Completely overriding developer control. With XHTML2 the role attribute has more flexibility without breaking separation of concerns.
Re: (Score:2)
I don't think you understand what I am saying. With these semantic style classes, meaning is applied to the meaningless. Style classes are for style, not meaning. It also applies to markup tags: tags define the structure of a document, not its meaning.
What does the style "copyright error" mean? In HTML5 it means that section is the copyright notice section for your page, and it is also an error display area, even though it may be intended to style an article on inadvertent copyright infringement.
Re:WHATWG: The worst thing to happen to the Web. (Score:4, Insightful)
Meh, getting 503s trying to log in. Sorry for the A/C Post.
XHTML was interesting and lovely, and no one gave a shit. Ideology loses to practicality in almost every case until ideology is reformed to conform to reality.
I think you'll find that if you look at HTML5, there's not a lot of presentational bits in it. Most of that is still reserved for CSS.
You'll also find that the cases where things are defined at least give the web a unified way to handle real web pages that exist *today*. Right now, a new browser would have to reverse engineer what Chrome, FF, IE and friends did in order to know how to render the web. HTML5 at least identifies the reality that exists.
You note that JS is being used to do things it shouldn't. On what grounds? Who are you to tell what should and shouldn't be done with a language and in a given environment? The practical fact is that folks *are* doing amazing things with JS. If you don't like the language, that's your problem. If you don't want it on your computer, don't use those websites. JS *does* lots of things today, and there's no reason to limit it artificially. You want something better out there? Come up with a solution and push it.
Your final comment notes that web developers aren't interested in quality and technical superiority. You're right. Why should they? What they care about is getting a product out. You're asking them to solve problems that they don't have.
Tks,
Jeff Bailey
(an employee of Google, not speaking for Google at all)
Re:WHATWG: The worst thing to happen to the Web. (Score:4, Interesting)
XHTML is easy to generate, manipulate, and validate? Have you ever written software that tried to handle XHTML? It's as complex as writing an XML handler, which is not trivial to do properly. Things like tag attributes add a whole extra layer of complexity to getting a machine to actually understand the document. Your contention that HTML5 is regressing with respect to mixing presentation and content is ignorant and borderline stupid. It makes me wonder if you've even read the spec. HTML5 eliminates presentation tags like center, tt, and the font tag. It does add tags that make it easier for user agents to determine the context of different parts of a document.
For instance the header, footer, and article tags let the UA figure out in a search which parts of the document they ought to pay more attention to. Search engines can focus on text inside article tags and ignore text matches in the footer or nav tags for instance. Screen readers don't need to try to parse pages based on tag attributes like they have to with HTML4/XHTML. A screen reader can know that it doesn't need to bother reading the contents of the footer or it can more easily provide a verbal menu based on the sections of the document.
XHTML5 (Score:2)
We forced the documents to validate before we persisted them
Which is still possible with HTML5. It has two surface forms, XML and a pseudo-SGML, which parse to the same DOM. The user can enter XHTML5, and you can still validate that. But the advantage of HTML5 is that its pseudo-SGML parser is more clearly specified, so that even tag soup translates to a well-defined DOM. If the user enters pseudo-SGML, you can parse that into a DOM and then serialize it back to XHTML5.
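A minimal sketch of that round trip in a browser (TypeScript; the input string is just an example), using the standard DOMParser and XMLSerializer:

// Parse "tag soup" with the HTML parsing algorithm, then serialize the
// resulting DOM back out as XML, i.e. XHTML5-style markup.
const soup = "<p>unclosed paragraph<li>stray list item";
const doc = new DOMParser().parseFromString(soup, "text/html");
const xhtml = new XMLSerializer().serializeToString(doc);
console.log(xhtml); // well-formed markup: html/head/body filled in, tags closed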
Re: (Score:2)
Are you seriously saying that you find parsing XML difficult? ... Handling XML (and, by extension, XHTML) is a trivial task.
Depends on your point of view. Using an already existing XML parser in an application can be pretty easy. Writing a fully compliant XML parser is far from simple. If it would be so simple, why are most XML libraries fairly large and complicated pieces of software?
It is pretty simple to write a non-validating parser for a limited subset of XML, but if you include things such as namespace support, XPATH support, not to mention validation by DTD, Relax-NG and/or XML Schema, the parser suddenly becomes very com
Re:WHATWG: The worst thing to happen to the Web. (Score:4, Insightful)
Tags like header and footer denote semantics which are part of the content (content denotes what is displayed, not how it is displayed). They don't say "the footer should be in a 10pt font" -- that is up to the CSS. They (and the other layout elements) denote the semantics of what is currently being done in an ad-hoc way. They allow things like search engines to identify relevant information (e.g. ignore the footer sections).
HTML5 is looking to be a great standard. Not perfect by any means, but it is a good step forward (giant leap?) in the right direction. Having a defined way of processing HTML5 and having an XML variant (XHTML) unified to the same DOM makes it easier to choose how you want to write/generate your HTML content.
There were some nice ideas in XHTML2, but it didn't pan out. That does not mean that some of those ideas cannot be integrated into HTML in the future like section has been.
It is also good to see Google seeking to improve video support.
Gradually, HTML5 support will improve, as will support for CSS3 as these standards get finalised. Also, audio and video support will stabilise as well. These, with all the advances in support for MathML, SVG, SMIL and other standards as well as performance improvements for JavaScript and hardware-accelerated page rendering mean that the web is only growing in strength.
As for JavaScript, it is just a scripting language -- you can do anything with it and hook it to anything. You do know that the "fetch more comments" feature of slashdot uses javascript? You do know that thunderbird and firefox make use of javascript for binding their UI together?
Header and footer; JavaScript deployment advantage (Score:2)
There is no need for elements like "header" and "footer" in HTML5. The exact same functionality is better represented as traditional divs or spans with a class specified. End of story.
So how are you going to get thousands of web sites to use the same class= for a header or footer so that the user can apply a user stylesheet to every site's header and footer?
Use C, C++, Python, Ruby, Perl, C#, OCaml, Haskell, Scheme or Common Lisp for even a week, and you'll immediately see how fucked up JavaScript is, and how pathetic of a language it is for development of code that exceeds two or three lines in length.
It's interesting that you mention Scheme and Common Lisp. The common opinion on the web is that JavaScript has Lisp semantics with C syntax. In fact, I'd wager that if M-expressions had ever been properly implemented in Lisp, they would look a lot like JavaScript. Another advantage of JavaScript is that end users might not have privil
Re: (Score:2)
Anyone who supports the "header" and "footer" elements, among several others, supports content mixed with presentation. It's a regression.
Very wrong. Header and footer elements denote document structure, nothing else. Of course they will have default styles, but that can be overridden like everything else. Actually header and footer elements are much more sensible than using divs with classes or ids. A header is specified to be used for certain parts of a document, and can be correctly interpreted by software such as screen readers and braille displays. How do you do that with divs? The id/class is an arbitrary string, not something that such
Re: (Score:2, Insightful)
JavaScript does not magically make AJAX possible. It works because the browser does it and gives JavaScript access to the needed objects. This can happen with any language integrated with a browser.
Re: (Score:3, Interesting)
Many of its new elements have gone out of their way to bring back the combination of presentation and content that we've tried to get rid of for over 15 years now.
Absolutely not true. The new tags are for things like articles, sections, and so on. They provide more semantic information, not less. The HTML 2 approach removed all of these as redundant because you can implement them with class attributes. The problem with this is that one site will use <div class="article">, another will use <div class="post">, a third will use <div class="blog">, and this makes it very difficult for the browser to render them in a consistent way and for other user
Re: (Score:3, Insightful)
The article, section, header, footer and aside tags don't have any presentation information (except that section/section/h1 is similar to using h2). An HTML5 browser should only have the following presentation logic done via CSS:
article, section, header, footer, aside { display: block; }
Anything more fancy is done by CSS. Which means that you can have a single CSS theme file (WordPress, ZenGarden, whatever) that is used by *any* website that uses HTML5 markup.
Re: (Score:2)
In reality, do you know what's going to happen with the <article> element? In order to make it render properly, people will have to specify a class or style, and fix the rendering using CSS.
article, nav, section, header, footer and friends are about markup, not about rendering. In terms of normal rendering it makes no difference whether you use <article> or <div class="article">; in terms of markup, however, it makes a huge difference. One of the core problems for me with the Web today is that there is simply no way to tell the browser what is the actual content and what is just a navigation bar. This in turn makes some webpages on some devices pretty much unusable (Wikipedia on a PSP for exampl
Re: (Score:2)
In reality, do you know what's going to happen with the <article> element? In order to make it render properly, people will have to specify a class or style, and fix the rendering using CSS. There's really no beneficial difference between <article class="..."> and <div class="...">. Most sensible people will just use divs, since they're supported by just about every browser still in use today.
That's precisely the point. They will be presented in different ways (like I said, they're not presentation tags), but the browser will know that they are articles, not something else. If it's on an eInk device, for example, it may decide to split a big page on article breaks. It may display a list of articles in a side view. Anything parsing the HTML for some purpose other than immediate display will know that these are articles, and not just some arbitrary level of detail in a hierarchy.
You seem to
Re: (Score:2)
If Google was serious... (Score:3, Insightful)
Re: (Score:2)
Not only because it's old; because it's old and there are newer codes that are better. This is why, for example, Apple uses AAC, even though they could have just used MP3 in a DRM container (back when they still applied DRM to downloads).
New video codes have brought clearly perceptible improvements, which is why we've seen MPEG-2, MPEG-4 ASP, and MPEG-4 Part 10 AVC/H.264 within the past 15 years (and H.265
Re: (Score:2)
In fact, they could have used Ogg Vorbis, whose bitstream was fixed as of 2000 and whose encoder was finalised in 2002 - AAC is only just getting to Vorbis levels of quality.
Re: (Score:2)
Early tests show [wikipedia.org] Apple AAC to be comparable to Vorbis.
Re: (Score:2)
No. codec/encoder/decoder refers to a specific implementation or implementations. "code" here refers to the encoding format itself.
Re: (Score:2)
We don't want to go back to codec hell... (Score:4, Interesting)
Theora lost because it wasn't as good as H.264 and it's still not as good as H.264 bit for bit. The only reason why the opensource world supports it isn't because it's better, but because it's the only "open source friendly" option. Sorry, but just because it fits an ideology doesn't mean much to the part of the world that uses the product. It's like suggesting that a professional 3D/video shop use Blender instead of Maya or Cinelerra instead of Final Cut Pro or Avid. The professionals are going to take a look at it for a while and go, "Nice toy, now I've got to get back to work."
If the opensource world wants Theora to succeed, you're going to have to produce something that's better than H.264, end of story. Until then the people working in video are going to continue using H.264 because it's everywhere and is currently the best mainstream codec available.
I worked in Video production in the late 90's through about 2005. H.264 was a godsend when we finally had a single Codec that was adopted by pretty much all recording hardware and editing software. Before it was a Codec Hell. Nobody I talk to in the industry, and I still have a lot of friends who work everywhere from their basement to large production shops, has any interest in embracing Theora or anything else. They only want to support 1 Codec that works everywhere, and that's H.264. Even if it costs them a little bit of money. Because whatever it costs them is likely cheaper than the headaches of having to support multiple formats.
Now, if Theora or some other patent-free format gets to the point where it can offer at least the same (really it has to be BETTER than H.264 in features and quality), only then will the production houses be interested in switching. And by better, I mean offering at least the same quality as H.264 at a lower bit rate.
Re: (Score:2)
Now, if Theora or some other patent free format gets to the point where it can offer ...
That brings up a question I've had in my mind for a while. I don't know how codecs/formats work, but can someone tell me if the Theora format can be improved to the point that it rivals H.264, while still being the Theora format? Or at some point is it necessary to call it a new format? And if so, what effect would a new, better Theora-derived format have if the world, hypothetically, had standardized on Theora?
Also, how much of a difference does the quality of the codec used to create theora videos make? I r
Re: (Score:2)
Don't get sucked in by the group-think. Theora already rivals H.264 - in most real applications it's highly unlikely anyone would ever notice the difference.
And yes, encoder and decoder development is at least as important as the underlying algorithm.
Re: (Score:2)
I worked in Video production in the late 90's through about 2005. H.264 was a godsend when we finally had a single Codec that was adopted by pretty much all recording hardware and editing software. Before it was a Codec Hell. Nobody I talk to in the industry, and I still have a lot of friends who work everywhere from their basement to large production shops, has any interest in embracing Theora or anything else. They only want to support 1 Codec that works everywhere, and that's H.264. Even if it costs them a little bit of money. Because whatever it costs them is likely cheaper than the headaches of having to support multiple formats.
I worked in GUI applications in the 90's through about 2005. Windows was a godsend when we finally had a single OS that was adopted by pretty much all hardware and software. Before it was an OS Hell. Nobody I talk to in the industry, and I still have a lot of friends who work everywhere from their basement to large production shops, has any interest in embracing Linux or anything else. They only want to support 1 OS that works everywhere, and that's Windows. Even if it costs them a little bit of money. Bec
Re: (Score:2)
They only want to support 1 OS that works everywhere, and that's Windows.
You had a point until right there. Windows doesn't work everywhere, and the places it does then it is only for some definitions of "work".
Re: (Score:3, Insightful)
``Theora lost because it wasn't as good as H.264 and it's still not as good as H.264 bit for bit. The only reason why the opensource world supports it isn't because it's better, but because it's the only "open source friendly" option. Sorry, but just because it fits an ideology doesn't mean much to the part of the world that uses the product. It's like suggesting that a professional 3D/video shop use Blender instead of Maya or Cinelerra instead of Final Cut Pro or Avid. The professionals are going to ta
Re: (Score:2)
On the contrary. It's just your definition of "best" is inaccurate.
Vorbis had no installed base, was computationally more complex, and quality was only slightly better than the best MP3 encoders, and even then, not in all cases (it really falls apart on some audio, where MP3 is
Show me your cards (Score:2)
Quality is not the reason why Theora lost to H.264, just like quality wasn't the reason why Vorbis lost to mp3.
Then tell me why Theora lost. Don't stand on the quick, weightless, mod-up to +4 "Insightful."
Re: (Score:2)
see http://slashdot.org/comments.pl?sid=1613660&cid=31801318 [slashdot.org]
Re: (Score:2)
There are loads of reasons.
1) Convenience: Theora isn't supported by Windows and OS X by default, so those users won't use it unless it has something special to offer.
2) Competition. Supporting Theora in those OSes would level the playing field for FOSS that can't legally support the dominant but proprietary codecs, so MS/Apple would give away an important competitive advantage if they were to support it. That won't happen.
3) Too little, too late.
4)
5)
6) Quality.
and probably more. Quality alone isn't decisiv
more codec support (Score:2)
Where space and power matters most (pocketable devices), I'm just not entranced by support for more codecs that aren't efficient.
Some day it'll be reasonable for the device in your pocket to play video in any format you find it in. But for now, I think I'd rather the effort were concentrated on maxing out the efficiency (bits and power) of the codecs that are already in wide use.
Technical Objections - Does Not Apply? (Score:2)
Technical Objections To the Ogg Container Format:
http://news.slashdot.org/story/10/03/03/1913246/Technical-Objections-To-the-Ogg-Container-Format [slashdot.org]
[I really don't know]
Is this a branch not discussed in the above article?
H.261 (Score:2)
We already have an unpatented, royalty-free, unencumbered, lowest-common-denominator video codec for use on the internet: H.261.
H.323 specifies it as the lowest common denominator for video-over-IP, so all video phones already support it, including hardware implementations. It was published in 1990 - twenty years ago - so it is as patent-free as you can get. And it's published by the ITU, so the specification is freely available.
Re: (Score:2, Interesting)
What does MPEG-LA say about re-licensing? (Score:3, Interesting)
I've recently read the short description of the MPEG-LA license terms for broadcasters. (Not the full licenses, though)
If I understand it correctly, by purchasing a license you're allowed to use H.264 for YOUR distribution, but the terms do not mention re-licensing to third parties. My best guess is that this means re-licensing is not allowed.
But, and here's the catch, when YouTube videos are embedded into other sites (Facebook, or Joe Shmoe's blog), isn't that a form of re-sale to a third party?
Can someone with more insight comment on this?
Re: (Score:2)
Youtube continues to serve the videos. They're only "linked" (embedded) from other sites.
Additionally, non-paywalled online videos in H.264 are gratis for the next several years at least, so it would make no difference.
VP3 base? (Score:2)
Didn't Google buy On2? Why aren't we seeing open VP7 and VP8?
Re: (Score:2)
Re: (Score:2)
I'd guess it's mostly about the 90->50% scenario; current ARMs are quite powerful, certainly enough for video at resolutions which make sense on devices that are likely to play them. But for those devices battery is probably the most limiting thing nowadays... and ARM has great ways of conserving it, if it is given the chance, i.e. if it's not under constant high CPU usage.
Re: (Score:2)
Re:Once again (Score:4, Informative)
Neither Apple's nor Microsoft's products support Flash out of the box either, yet Flash is fairly ubiquitous right now.
Really? The last two Macs I've bought have come with Flash preinstalled. Not sure about Windows, but someone mentioned a few days ago here that their new Windows machine had Flash preinstalled, although it's not clear whether this was done by MS or the OEM.
Re: (Score:2)