Wireless Networking Hardware

Australian WiMax Pioneer Calls It a Disaster

Anonymous Coward writes "Garth Freeman, CEO of Australia's first WiMax operator, sat down at the recent International WiMax Conference in Bangkok and unleashed a tirade about the failings of the technology, leaving an otherwise pro-WiMax audience stunned. His company, Buzz Broadband, had deployed a WiMax network over a year ago, and Freeman left no doubt about what conclusions he had drawn. He claimed that 'its non-line of sight performance was "non-existent" beyond just 2 kilometres from the base station, indoor performance decayed at just 400m and that latency rates reached as high as 1000 milliseconds. Poor latency and jitter made it unacceptable for many Internet applications and specifically VoIP, which Buzz has employed as the main selling point to induce people to shed their use of incumbent services.' We've previously discussed the beginnings of WiMax as well as recent plans for a massive network in India."
  • by rueger ( 210566 ) on Sunday March 23, 2008 @11:59AM (#22836762) Homepage
    For some time now I've been taking part in WiMAX trials here in Hamilton, Ontario. [community-media.com] This too was trumpeted as a glorious thing that would change the face of our city, bring us into the high-tech 21st century, etc.

    In practice, although WiMAX seems to work OK (aside from a real lag much of the time, which may just be bad server configuration by Primus Communications), my sense is that the company isn't really committed to it. I doubt that there will be a serious public roll-out.

    The idea seems great - a wireless Internet connection that works wherever you are. The reality seems a bit less rosy, and my guess is that a city wide wireless network will need a good level of customer support - not Primus' strong point by a long shot.
  • by russotto ( 537200 ) on Sunday March 23, 2008 @12:41PM (#22837008) Journal
    NLOS performance depends on a number of things, including how well the underlying technology can handle multipath and otherwise distorted signals. But the main thing is probably frequency; the higher the frequency, the worse the NLOS performance. WiMax is designed to run at many different frequencies, and the article fails to mention which one was in use.

    The issues with latency and jitter, though, probably aren't as dependent on frequency.
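
    As a quick check on the frequency point (a sketch with assumed bands, not figures from the article), free-space path loss alone grows by 20*log10(f), before any NLOS obstructions are counted:

        import math

        def fspl_db(distance_m: float, freq_hz: float) -> float:
            """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
            return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

        # Compare plausible WiMax bands at the 2 km range cited in the article.
        for ghz in (2.3, 2.5, 3.5, 5.8):
            print(f"{ghz} GHz at 2 km: {fspl_db(2000, ghz * 1e9):.1f} dB")

    Each doubling of frequency costs about 6 dB of extra path loss before walls and foliage are even considered, which is why knowing the band in use matters.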
  • AM Radio = Range (Score:1, Informative)

    by Anonymous Coward on Sunday March 23, 2008 @01:08PM (#22837164)
    AM radio is still around because it is ubiquitous and cheap, and its lower frequency gives it long range. That's the technical advantage over FM, which offers clearer signals and more bandwidth but shorter range. If AM radio had no advantage, no amount of regulation would save it.

    Yes, you are one of those anti-government nutters.
  • by multipartmixed ( 163409 ) on Sunday March 23, 2008 @01:10PM (#22837178) Homepage
    I am no RF expert either, but I have been on the receiving end of WiMax-ish technology, and the jitter was so bad, it was completely unusable for VoIP and even made ssh annoying at times.

    This was kit designed to work for up to 10 km (6 miles) and I had line-of-sight to the base station, which was about 150m (500 ft) away.

    Sky.. sky.. SkySomething. SkyPilot? Some kind of weird meshy network. I was also connected to the "master" tower, not a leaf.

    The problem, as it was explained to me, was that it has a collision/backoff algorithm not unlike that of 10BASE2 Ethernet ("thinnet"). So the 50 (or so) neighbours I had, plus the leaf towers (2 of them, I think), were causing me to not get "slots" with the master on a timely basis, hence the jitter.

    So, your L2 protocol hypothesis is reasonable from my perspective, although we can eliminate poor radio performance as a direct cause. Changing the radio from broadcast to something like time or code division multiplexing would be a good solution for reducing jitter, but probably causes other problems (like decreased burst bandwidth and range).
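
    The contention effect is easy to see in a toy model (made-up numbers, not the actual SkyPilot MAC): with 50 neighbours sharing slots by random contention, the wait until you win a slot varies wildly, while a fixed TDMA assignment has no variance at all.

        import random
        import statistics

        random.seed(42)  # deterministic toy run

        def contention_wait(n_contenders: int) -> int:
            """Slots spent waiting until 'our' station (index 0) wins one,
            with each slot going to a uniformly random contender."""
            slots = 0
            while random.randrange(n_contenders) != 0:
                slots += 1
            return slots

        tdma = [50] * 200  # fixed schedule: exactly every 50th slot is ours
        csma = [contention_wait(50) for _ in range(200)]
        print("TDMA jitter (stdev of wait):", statistics.stdev(tdma))
        print("contention jitter (stdev of wait):", statistics.stdev(csma))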

    My solution? "*sigh* - cancel the wireless link and order me up a T1"

    Wireless is nice because it's easy. But it sure ain't there yet.
  • Re:AM Radio = Range (Score:5, Informative)

    by SlashWombat ( 1227578 ) on Sunday March 23, 2008 @01:59PM (#22837484)
    AM radio spans roughly 1 MHz (i.e., approx. 530 kHz to 1.6 MHz). You CANNOT fit a broadband wireless service into that space ... furthermore, the resonant quarter-wave antenna length varies between (approx.) 150 metres and 40 metres. I'd like to see you stick that out of the back of your laptop.

    It's doubtful you could effectively fit even one 54 Mbit channel in that space; plus, because it is NOT line of sight, someone a few miles away WILL interfere with your local transmissions.

    Low frequencies (below about 2 MHz) hug the ground, which means AM does not have line-of-sight issues. Some AM broadcast stations have service areas of hundreds of miles in radius.

    FM is 88 to 108 MHz. A quarter wave here is roughly 1 metre. Still a thumping huge antenna! These frequencies are considered line of sight; however, there is a small area of coverage extending beyond line of sight. Enough bandwidth for a few 54 Mbit channels.

    WiFi is generally at 2.4 GHz, the same band as microwave ovens use. Has to do with the frequency of maximum absorbance of water. (Thus used in ovens!) A quarter wavelength is approx. 3 cm ... okay for a laptop (easy).

    To get sufficient bandwidth, only UHF and up is really useful. But get too high in the microwave band and the signal won't even get through a thin wall.

    So, there are trade-offs that genuinely make sense for wireless broadband. (Lots more reasons as well ...)
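
    Those antenna figures all fall out of lambda = c/f; a quick check (my arithmetic, rounding about as loosely as above):

        C = 299_792_458  # speed of light, m/s

        def quarter_wave_m(freq_hz: float) -> float:
            """Quarter-wavelength antenna length in metres."""
            return C / freq_hz / 4

        for name, f in [("AM 530 kHz", 530e3), ("AM 1.6 MHz", 1.6e6),
                        ("FM 100 MHz", 100e6), ("WiFi 2.4 GHz", 2.4e9)]:
            print(f"{name:>12}: {quarter_wave_m(f):.3f} m")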
  • Re:Clearwire (Score:4, Informative)

    by Anonymous Coward on Sunday March 23, 2008 @02:00PM (#22837488)
    Lots of people seem to be confused about whether Clearwire is WiMax.

    My Clearwire device has an FCC ID of PHX-RSU2510F. The FCC docs say that it operates on 2.496-2.690 GHz. The chipset leads me to believe that it is an implementation of the Motorola Expedience [nextnetwireless.com] Wireless Broadband CPE.

    The Motorola RDM specs say that the device can operate in Expedience (up to 2W) or WiMax (up to 0.5W) modes. They also say that in Expedience mode it is a layer 2 smart bridge, while in WiMax mode it is a router with NAT, DHCP and firewall functions.

    Since my device acts like a layer 2 bridge, I conclude that it is in Expedience mode. Having just checked the Wikipedia article, I see that the first paragraph agrees:
    "Clearwire currently uses Expedience wireless technology, dubbed Pre-WiMax, transmitted from cell sites over licensed spectrum of 2.5-2.6 GHz in the U.S. and 3.5 GHz in Europe."

    So no, it isn't WiMax: they use the WiMax frequency range, but they can transmit a stronger signal. That seems to be the main difference between the technologies.

    This Motorola promotional video [motorola.com] talks about some of the infrastructure and business justifications for using their Expedience gear.
  • by Anonymous Coward on Sunday March 23, 2008 @02:54PM (#22837808)
    You don't even need a proper diode. Just a rusty razor blade and a safety pin.
  • by gwolf ( 26339 ) <gwolf@@@gwolf...org> on Sunday March 23, 2008 @02:58PM (#22837830) Homepage
    I chose as my ISP in Mexico City E-go [ego.net.mx], co-owned by Alestra [alestra.com.mx], the Mexican AT&T subsidiary. It started offering WiMax connections in 2003 in limited areas of Mexico City (I understand nowadays it covers most of the Central, Western and Southern parts), even before WiMax was standardized. Clients get a NextNet [nextnetwireless.com] RSU unit [nextnetwireless.com], which is basically a network bridge.
    The latency complaints you state are simply not true - I get consistent ping response times of 100ms on average (with minimum response times of around 50ms) to hosts in Mexico City, and 200ms to hosts in the USA. Yes, this is about 80ms higher than wired equivalents - but it's not so much of a killer. What I do get, of course, is much higher packet loss - about 5% when things are optimal, and it sometimes gets up to 50%. But then, I'm located in a relatively poor reception area, in one of the lower-income (meaning: no incentive to place many antennas nearby) neighbourhoods in the South of the city, where the mostly flat valley that holds most of the city begins to become quite hilly. The RSU unit does not provide the client any means of monitoring the connection to help choose the best possible location. It only has five LEDs indicating signal strength (and no, they are not blue, just an unfashionable old green. Bummer.), and I always get one or two of them. I have seen significantly better signal quality at a five-LED connection.
    Prices and speed are more or less on par with Mexico's near-monopoly TelMex; I'm paying about US$40 for a nominal 1Mbps/128Kbps connection (512K guaranteed, whatever that means). The upstream data flow _is_ shaped to 128k, but the downstream speed is not - when the network smiles on me, I get up to 2Mbps. It is not common, though.
    I understand E-go (back then called I-go, don't ask me why) was praised as the world's first massive WiMax deployment - even before the standard was finalized. There are several aspects of the installed network that clearly show the gear is pre-standard (e.g. extreme sensitivity to position changes - if I move my RSU over two centimeters, it has to resynchronize with the antenna. This process takes around two seconds, so no big deal).
    To me, clearly, the reason it hasn't become more popular is that it is owned by a relatively small company, which has not had the muscle to stand up to Telmex's publicity machine.
    Of course, we benefit more than DSL users from having a low client density :) E-go owns 20MHz of spectrum, which allows it to give a theoretical maximum of 70Mbps to a given area. If too many people were to subscribe, each client would have much less effective bandwidth allotted.
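
    To put numbers on that last point (the subscriber counts are hypothetical; only the 70Mbps figure is from above):

        # Per-client share of a 70 Mbps shared sector, for assumed subscriber counts.
        SECTOR_CAPACITY_MBPS = 70.0

        for subscribers in (10, 50, 200, 1000):
            print(f"{subscribers:>5} active clients -> "
                  f"{SECTOR_CAPACITY_MBPS / subscribers:.2f} Mbps each")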
  • by westlake ( 615356 ) on Sunday March 23, 2008 @03:31PM (#22838086)
    AM? I have never ever listened to a radio station in my life. It is all FM here in Europe.

    Medium-Wave broadcasting in the U.S. evolved when the country was still significantly rural.

    Distances in the U.S. can defeat the European imagination.

    The 50,000 watt "clear channel" station could be heard across several states - and to distances of 1,000 miles under favorable conditions.

    AM radio had a distinct local or regional identity which persists to this day.

  • Re:AM Radio = Range (Score:5, Informative)

    by Jott42 ( 702470 ) on Sunday March 23, 2008 @04:53PM (#22838634)
    Minor nitpick: 2.4 GHz is not the frequency of maximum absorbance of water. The frequency of the maximum is temperature-dependent, and the absorbance peak is very broad, so there is no need to use any special frequency. 2.4 GHz is used in microwave ovens because it was free to use, being an ISM band, and because the penetration depth is useful for cooking.
  • by MichaelSmith ( 789609 ) on Sunday March 23, 2008 @05:54PM (#22839056) Homepage Journal

    Even if we switch off of AM and FM and such to fancy digital encodings, every radio should have the ability to tune into old-fashioned AM signals built in

    No, because medium-wave hardware is just too bulky. You can get small, cheap FM-only radios for this reason.

    And yeah, I grew up making crystal radios and small powered radios when I was eight or nine. It's hard to buy the nice open tuning gangs now. The old ways are going.

  • by Anonymous Coward on Sunday March 23, 2008 @08:25PM (#22840470)

    Distances in the U.S. can defeat the European imagination.

    Right, because it's not like we have a comparable neighbour a few klicks to the west. Tell me, is San Fran further from or closer to Boston than Moscow is to Vladivostok?
  • by Ungrounded Lightning ( 62228 ) on Sunday March 23, 2008 @11:38PM (#22841838) Journal
    He claimed ... latency rates reached as high as 1000 milliseconds. Poor latency and jitter made it unacceptable for many Internet applications and specifically VoIP, which Buzz has employed as the main selling point to induce people to shed their use of incumbent services.

    Sounds like they didn't configure it right, on one or both of two issues.

    First: WiMAX has a frame rate that is an exact multiple of the 8000 frames/second rate of the telephone networks' digital carriers (and A/D converters). While this was obviously intended to allow it to carry telephone TDM signals and their associated timing (which normally isn't an issue for IP transport), WiMAX has its own, unrelated, timing issues that mandate the base stations be synchronized - to each other and preferably to a telephony network clock or a GPS-derived clock.

    The base stations assign timeslots to each remote. They measure the propagation characteristics and (depending on the sort of base station) may adjust signal strengths, modulation rates, and/or antenna aim for the associated timeslot to obtain good communication, and may pick a timeslot that is currently "quiet" on the antenna / antenna-aim appropriate for the remote in question.

    The problem is that two subscriber stations attached to two different base stations (perhaps not adjacent ones) that are reusing a channel may both be "audible" to both base stations - perhaps due to using non-directional antennas, perhaps due to reflections. If the base stations assign overlapping timeslots to their respective subscriber stations, they will interfere. So the base stations try to assign their subscriber stations "quiet" slots - i.e. slots that don't already have interference from another nearby base station's remotes.

    Now that's just fine if the base stations' clocks are synchronized: the timeslots hold a constant relationship to each other, and a quiet slot stays quiet. But if the base stations are not synchronized, their relative framing drifts. So one base station's subscriber's slot may drift into that of another base station's subscriber, resulting in a drop in link quality. Then the base stations readjust the configuration - perhaps moving the subscriber stations to new slots - but those slots drift in the same way. Over and over. Result: links keep flaking out and control traffic is massive.

    With the base stations synchronized and the subscriber stations carrying VoIP or other fixed-rate stream traffic, the stations will tend to hold on to quiet slots that march along with the stratum-III timing regularity of telephone carriers.
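
    How fast unsynchronized clocks break things is simple arithmetic (a sketch with assumed numbers - the frame length, slot count and clock error are illustrative, not from the WiMAX spec):

        # Two base stations with a small relative clock error: how long until
        # their slot boundaries slip past each other by a whole slot?
        FRAME_S = 0.005        # 5 ms frame, i.e. 200 frames/s (a divisor of 8000)
        SLOT_S = FRAME_S / 48  # assume 48 slots per frame
        PPM_OFFSET = 10        # assume 10 ppm relative clock error

        drift_s_per_s = PPM_OFFSET * 1e-6  # seconds of slip per second of wall time
        print(f"slot width: {SLOT_S * 1e6:.0f} us")
        print(f"a 'quiet' slot slips into its neighbour every "
              f"{SLOT_S / drift_s_per_s:.0f} s")

    At 10 ppm the alignment slips a full slot roughly every ten seconds, so unsynchronized base stations would be reassigning slots almost continuously.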

    The second Quality of Service issue is packet priority. The routers at both the subscriber and base stations should be identifying the VoIP (or other fixed-bandwidth streaming) flow and giving its packets priority over other traffic on the link. That way the (limited and constant bandwidth) voice packets can take the preallocated slots every time while any additional variable traffic waits for the necessary additional slot allocation. If this is not done, other traffic (such as file transfers and web browsing) will keep "stealing" the time slots out from under the time-critical VoIP / streaming packets, resulting in long and variable latencies - horrendous jitter. If it IS done (and the link is stable due to the base-station timing synchronization), the VoIP flows will have jitter characteristics virtually identical to those of telephony TDM networks.
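
    A strict-priority queue captures the idea (an illustrative sketch, not an actual WiMAX MAC scheduler; real gear would also rate-police the voice flow so it can't starve everything else):

        import heapq
        from itertools import count

        seq = count()  # tie-breaker so equal-priority packets stay FIFO
        queue: list[tuple[int, int, str]] = []

        def enqueue(name: str, voip: bool) -> None:
            # Priority 0 for VoIP, 1 for everything else.
            heapq.heappush(queue, (0 if voip else 1, next(seq), name))

        for pkt in ["ftp-1", "voip-1", "web-1", "voip-2", "ftp-2"]:
            enqueue(pkt, voip=pkt.startswith("voip"))

        while queue:
            _, _, name = heapq.heappop(queue)
            print("sending", name)  # voip-1, voip-2, ftp-1, web-1, ftp-2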

    (This, by the way, is why "network neutrality" can't be reduced to "treat all packets the same" if you want to share the same IP network between streaming services such as video and VoIP and best-effort services such as file transfers and browsing.)

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...