SimpleDB Performance: 5 Steps to Achieving High Write Throughput

I was recently tasked with fork-lifting ~1 billion rows from Oracle into SimpleDB. I completed this forklift in November 2009 after many attempts. To make this as efficient as possible, I worked closely with Amazon’s SimpleDB folks to troubleshoot performance problems and create new APIs. I’d like to share some recommendations and observations.

Although I have covered these recommendations in depth in a previous post (i.e. link above), I’d like to present a more succinct list of recommendations and observations here to maximize knowledge transfer.


The architecture consists of a daemon (i.e. IR, for Item Replicator) that reads records out of Oracle and writes them into multiple SimpleDB domains. The architecture diagram also shows a second IR process that reads data out of SimpleDB for insertion into Oracle; ignore it for the purposes of this discussion. When I refer to IR in this article, I mean the process replicating from Oracle to SimpleDB.


  1. Shard your data
    1. You can achieve much higher aggregate write rates against multiple domains than against a single domain, because SimpleDB throttles (rate-limits) write traffic at the domain level. Rather than using a single domain, spread your data across several.
  2. Use slow-ramp up for writing
    1. AWS (SimpleDB) doesn’t handle bursty writes well and will often respond by throttling IR. When your data uploader starts up, have it increase its write rate slowly.
  3. Use some sort of back-off strategy
    1. I’ve adopted Amazon’s recommended retry intervals (i.e. 250 ms, 500 ms, 1 s, 2 s): wait 250 milliseconds before retrying after the first failure, 500 milliseconds after the second failure, and so on. After the 3rd retry attempt, stick with 2-second idle intervals.
  4. Use BatchPutAttributes instead of the singleton PutAttributes
    1. This will get you an order-of-magnitude improvement in throughput.
  5. Set replace=false on puts 
    1. This is the default. If you know that you are strictly inserting unique records, puts with replace=false will run much faster than puts with replace=true.
    2. Also, since false is the default, Amazon recommends not setting replace at all.
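The five steps above can be sketched as a single uploader loop. To be clear, this is an illustrative sketch and not the actual IR code: the domain count, the domain-name scheme, the ramp/rate parameters, the `ServiceUnavailable` placeholder, and the `client.batch_put_attributes(domain, items)` wrapper are all assumptions I'm introducing for the example.

```python
import hashlib
import time

# ASSUMPTION: 16 shard domains named "items_00" .. "items_15", and a
# hypothetical `client` whose batch_put_attributes(domain, items) wraps
# SimpleDB's BatchPutAttributes API. None of this is from the original post.
NUM_DOMAINS = 16


class ServiceUnavailable(Exception):
    """Placeholder for the throttling error SimpleDB returns when overloaded."""


def shard_domain(item_name, num_domains=NUM_DOMAINS):
    # Step 1: route each item to a domain by hashing its name, so writes
    # spread evenly across domains instead of hitting one domain's limit.
    h = int(hashlib.md5(item_name.encode("utf-8")).hexdigest(), 16)
    return "items_%02d" % (h % num_domains)


# Step 3: Amazon's suggested retry intervals; stay at 2 s after the 3rd retry.
RETRY_INTERVALS = [0.25, 0.5, 1.0, 2.0]


def backoff_interval(attempt):
    return RETRY_INTERVALS[min(attempt, len(RETRY_INTERVALS) - 1)]


def flush(client, domain, items, max_retries=8):
    # Steps 4 & 5: one BatchPutAttributes call per batch of up to 25 items;
    # we never ask for replace, so the faster insert-only default applies.
    for attempt in range(max_retries):
        try:
            client.batch_put_attributes(domain, dict(items))
            return
        except ServiceUnavailable:
            time.sleep(backoff_interval(attempt))
    raise RuntimeError("gave up on %d items for %s" % (len(items), domain))


def upload(client, rows, batch_size=25, max_rate=100.0, ramp_seconds=300.0):
    # Step 2: start at ~10% of the target rate and ramp up linearly over
    # ramp_seconds, instead of hitting SimpleDB with a burst at startup.
    start = time.time()
    pending = {}  # domain -> [(item_name, attributes), ...]
    for item_name, attributes in rows:
        domain = shard_domain(item_name)
        pending.setdefault(domain, []).append((item_name, attributes))
        if len(pending[domain]) >= batch_size:
            flush(client, domain, pending.pop(domain))
            elapsed = time.time() - start
            rate = max_rate * max(0.1, min(1.0, elapsed / ramp_seconds))
            time.sleep(batch_size / rate)  # items/sec pacing
    for domain, items in pending.items():  # drain partial batches
        flush(client, domain, items)
```

The hash-based routing keeps any single domain from becoming the throttling bottleneck, and the per-batch sleep is what makes the ramp-up gradual rather than bursty.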

Feel free to follow me on Twitter (@r39132).
