Twitter Throttling Hits Third-Party Apps

Barence writes "Twitter's battle to keep the microblogging service from falling over is having a dire effect on third-party Twitter apps. Users of Twitter-related apps such as TweetDeck, Echofon and even Twitter's own mobile software have complained of a lack of updates, after the company imposed strict limits on the number of times third-party apps can access the service. Over the past week, Twitter has reduced the number of API calls from 350 to 175 an hour. At one point last week, that number was temporarily reduced to only 75. A warning on TweetDeck's support page states that users 'should allow TweetDeck to ensure you do not run out of calls, although with such a small API limit, your refresh rates will be very slow.'"
  • Disclaimer: I'm not familiar with the Twitter API. If the assumptions I make are wrong, I apologize.

    Over the past week, Twitter has reduced the number of API calls from 350 to 175 an hour.

    Okay, if you're making that many calls to Twitter then there might be an inherent flaw with their RESTful interfaces. I think for a long time, the "web" as we know it has suffered from the lack of the Event/Listener paradigm. This is a pretty simple design concept that I'm going to refer to as the Observer [wikipedia.org]. Let's say I want to know what Stephen Hawking is tweeting about, and I want to know 24/7. Now if you have to make more than one call, something is wrong. That one call should be a notification to Twitter of who I am, where I can be contacted and what I want to keep tabs on--be it a keyword or a user. So all I should ever have to do is tell Twitter that I want everything from Stephen Hawking and everything tagged #stephenhawking (or whatever), and from that point on it should try to deliver those messages to me via any number of technologies. Simple pub/sub [wikipedia.org] message queues could be implemented here to remove my need to continually go to Twitter and ask: "Has Stephen Hawking said anything new yet? *millisecond pause* Has Stephen Hawking said anything new yet? *millisecond pause* ..." ad infinitum. I'm not claiming Twitter does this, but a cursory glance at the API [twitter.com] suggests it's missing the sort of Observer paradigm that would allow for the scalability they need.

    I'm not pointing the finger at Twitter; it's a widespread problem that even I have been a part of. Ruby makes coding RESTful interfaces so easy that it's very tempting to just throw up a few controllers that are basically CRUD interfaces for databases and call it a day. I suspect that Twitter is feeling the impending pain of popularity right about now ...
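    A minimal sketch of the pub/sub idea the comment above describes, in Python. The Hub class, the topic strings and the callback delivery are illustrative assumptions, not Twitter's actual design: subscribers register once, and the hub pushes each new message to them instead of answering repeated polls.

        from collections import defaultdict

        class Hub:
            # Toy publish/subscribe hub: register once, get pushed updates.
            def __init__(self):
                self.subscribers = defaultdict(list)   # topic -> list of callbacks

            def subscribe(self, topic, callback):
                # One registration replaces endless "anything new yet?" polling.
                self.subscribers[topic].append(callback)

            def publish(self, topic, message):
                # Push the update to every listener the moment it happens.
                for callback in self.subscribers[topic]:
                    callback(message)

        hub = Hub()
        hub.subscribe("@stephenhawking", lambda msg: print("new tweet:", msg))
        hub.subscribe("#stephenhawking", lambda msg: print("tagged tweet:", msg))
        hub.publish("@stephenhawking", "Testing the event horizon.")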

  • by Anonymous Coward on Wednesday July 07, 2010 @04:06PM (#32830742)
    They're working on it. They have a streaming API in beta right now.
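    For context, a hedged sketch of what consuming a streaming API can look like from the client side: one long-lived HTTP connection delivering updates as they arrive, instead of repeated polling calls. The endpoint URL and the assumption that updates arrive as newline-delimited JSON are illustrative, not a description of the actual beta.

        import json
        import requests

        def follow_stream(url, auth):
            # Hold one connection open and read updates as the server pushes them.
            with requests.get(url, auth=auth, stream=True, timeout=90) as resp:
                resp.raise_for_status()
                for line in resp.iter_lines():
                    if not line:                 # skip keep-alive newlines
                        continue
                    status = json.loads(line)
                    print(status.get("text"))

        # follow_stream("https://stream.example.com/statuses/sample.json", ("user", "pass"))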
  • by Late Adopter ( 1492849 ) on Wednesday July 07, 2010 @04:17PM (#32830908)
    Bad form to reply twice, but I forgot something rather crucial: the "right" way to do this sort of thing might be to offer notifications over XMPP (i.e. Jabber/GTalk). Twitter used to do this, but they couldn't figure out how to keep it running under heavy load (which I would consider a fault on their end rather than a fault with XMPP as a solution).

    XMPP would at least take advantage of established listening pathways (GTalk clients on mobile devices, etc).
  • Re:175/hr is slow? (Score:4, Informative)

    by Alex Zepeda ( 10955 ) on Wednesday July 07, 2010 @04:38PM (#32831240)

    The API rate limit is per hour, per user if authenticated and per IP if not. Unfortunately the Twitter API does not allow you to aggregate requests even if their web site does (e.g. status updates for all of the people I'm following and all of the things people I'm following have retweeted). If you go through the API docs, you'll find all sorts of horrid-seeming inefficiencies and awkwardness with the API.

    For instance, when you request a status (or a list of statuses or whatever) you'll get back: the contents of the tweet, the user name, user id, URL for the user avatar, URL for the user's profile page background image, whether that user is following you, their real name, the number of tweets that user has made, and so on and so forth. A lot of this information could easily be cached by the client, but is instead sent for every tweet you get back.
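    A rough illustration of the client-side caching the comment above suggests: keep the bulky, rarely-changing per-user fields once, keyed by user id, and store only the lightweight parts of each individual status. The field names follow the payload described above and are assumptions about the exact response format.

        user_cache = {}

        def ingest_status(status):
            user = status["user"]
            # Cache the profile data the first time we see this user.
            user_cache.setdefault(user["id"], {
                "screen_name": user["screen_name"],
                "name": user["name"],
                "avatar_url": user["profile_image_url"],
            })
            # Keep only the per-tweet essentials; look the user up in the cache later.
            return {"id": status["id"], "user_id": user["id"], "text": status["text"]}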

  • by Animats ( 122034 ) on Wednesday July 07, 2010 @04:40PM (#32831262) Homepage

    Now if you have to make more than one call, something is wrong. That one call should be a notification to Twitter of who I am, where I can be contacted and what I want to keep tabs on--be it a keyword or a user.

    That's not easy to do on a large scale. A persistent connection has to be in place between publisher and subscriber. Twitter would have to have a huge number of low-traffic connections open. (Hopefully only one per subscriber, not one per publisher/subscriber combination.) Then, on the server side, they'd have to have a routing system to track who's following what, invert that information, and blast out a message to all followers whenever there was an update. This is all quite feasible, but it's quite different from the classic HTTP model.

    It's been done before, though. Remember Push technology [wikipedia.org]? That's what this is. PointCast sent their final news/stock push message [cnet.com] in February 2000. There's more support for "push" in HTML5, incidentally.

    If you really wanted to scale this concept, the thing to do would be to rework a large server TCP implementation so that it used a buffer pool shared between connections, rather than allocating buffers for each open connection. The TCP implementation needs to be optimized for a very large number of mostly-idle connections. Then implement an RSS server with slow polling, so that the client makes an RSS query which either returns new data, waits for new data, or times out in a minute or two and returns a brief "no changes" reply. Clients can then just read the RSS feed, and be informed immediately when something changes. A single server should be able to serve a few million Twitter-type users in this mode.

    The client side would encode what it was "following" in the URL parameters. The server side needs a fabric between data sources such that changes propagate from sources to front servers quickly, and then on each front server, all the RSS feeds for all the followers for the changed item get an update push.

    There's a transient load problem. If you have 50,000,000 users, each following a few hundred random users, load is relatively uniform and it works fine. If you have 50,000,000 people following World Cup scores, each update will force 50,000,000 transactions, all at once. All the clients get a notification that something has changed. So they immediately make a request for details (the picture of someone scoring, for example). All at the same time. However, if you arrange things so that the request for details hits a server different from the one that's doing the notifications, ordinary load-balancing will work.
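    A rough sketch of the slow-polling loop described above, from the client's point of view: one request is held open until something changes or the server answers with a brief "no changes", and then the client immediately asks again. The endpoint, the since_id parameter and the use of a 204 status for "no changes" are assumptions for illustration.

        import time
        import requests

        def slow_poll(url, since_id=None):
            while True:
                try:
                    # The server holds this request open until new data arrives or it times out.
                    resp = requests.get(url, params={"since_id": since_id}, timeout=150)
                except requests.exceptions.Timeout:
                    continue                     # nothing arrived in time; ask again
                if resp.status_code == 204:      # brief "no changes" reply
                    continue
                for item in resp.json():         # new items: process and remember where we are
                    print(item["text"])
                    since_id = item["id"]
                time.sleep(1)                    # small pause so a busy feed can't hammer the server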

  • by Miseph ( 979059 ) on Wednesday July 07, 2010 @05:13PM (#32831942) Journal

    Debit card processing systems require real-time access to the full network for every single transaction. PINs cannot be cached locally and must be validated before the transaction completes.

  • by DragonWriter ( 970822 ) on Wednesday July 07, 2010 @05:14PM (#32831956)

    Okay, if you're making that many calls to Twitter then there might be an inherent flaw with their RESTful interfaces. I think for a long time, the "web" as we know it has suffered from the lack of the Event/Listener paradigm. This is a pretty simple design concept that I'm going to refer to as the Observer [wikipedia.org].

    For messaging architectures (like, say, the internet), the pattern is usually described as "Publish/Subscribe". All serious messaging protocols support it (XMPP, AMQP, etc.) and some are dedicated to it (PubSubHubbub). The basic problem with using it all the way to the client is that many clients run in environments where it is impractical to run a server, which makes receiving inbound connections difficult.

    There are fairly good solutions to that, mostly involving a proxy for the client: something running somewhere that can act as a server and hold messages, with the client then calling the proxy (rather than the message sources) to collect all of its pending messages together.

    I'm not leveling the finger at Twitter, it's a widespread problem that even I have been a part of. Ruby makes coding RESTful interfaces so easy that it's very very tempting to just throw up a few controllers that are basically CRUD interfaces for databases and to call it a day.

    Given what's been published about Twitter in the past (including them at one point building their own message queueing system because none of the existing ones they tried seemed adequate), I don't think what they've done on the back-end is as simplistic as that, though they may be forcing third-party apps through an API which makes it look like that's what is going on (and produces inefficiencies in the process).
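    A toy sketch of the proxy approach mentioned above: sources push messages to a mailbox held on the client's behalf, and the client collects everything pending in a single call. The class and method names are illustrative assumptions.

        from collections import defaultdict

        class MessageProxy:
            # Store-and-forward mailbox standing in for a client that cannot
            # accept inbound connections.
            def __init__(self):
                self.mailboxes = defaultdict(list)    # client_id -> pending messages

            def push(self, client_id, message):
                # Called by publishers; works even while the client is offline.
                self.mailboxes[client_id].append(message)

            def drain(self, client_id):
                # Called by the client: fetch every pending message in one request.
                pending = self.mailboxes[client_id]
                self.mailboxes[client_id] = []
                return pending

        proxy = MessageProxy()
        proxy.push("alice", "tweet 1")
        proxy.push("alice", "tweet 2")
        print(proxy.drain("alice"))    # ['tweet 1', 'tweet 2']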
