Tag Archives: Redis

Amazon AWS – Installing Redis on EBS

In this step-by-step guide I’ll show you how to install Redis on AWS (Amazon Linux AMI).

I’ll assume you’re performing the steps below as root (sudo -s).

  1. First you need to have the following tools installed:
    > gcc
    > gcc-c++
    > make

    yum -y install gcc gcc-c++ make
    


  2. Download Redis:
    cd /usr/local/src
    wget http://download.redis.io/releases/redis-2.8.12.tar.gz
    tar xzf redis-2.8.12.tar.gz
    rm -f redis-2.8.12.tar.gz
    


  3. Build it:
    cd redis-2.8.12
    make distclean
    make
    


  4. Create the following directories and copy the binaries:
    mkdir /etc/redis /var/redis
    cp src/redis-server src/redis-cli /usr/local/bin
    


  5. Copy the Redis template configuration file into /etc/redis/, using the instance’s port number as its name (a best practice mentioned on the Redis site):
    cp redis.conf /etc/redis/6379.conf
    


  6. Create a directory inside /var/redis that will act as the working/data directory for this Redis instance:
    mkdir /var/redis/6379
    


  7. Edit the Redis config file to make the necessary changes:
    nano /etc/redis/6379.conf
    


  8. Make the following changes to 6379.conf (see the example below):
    > Set daemonize to yes (by default it is set to no).
    > Set pidfile to /var/run/redis.pid
    > Set preferred loglevel
    > Set logfile to /var/log/redis_6379.log
    > Set dir to /var/redis/6379

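    For reference, after these edits the relevant lines in /etc/redis/6379.conf should look roughly like this (loglevel notice is just one of the available levels; pick whichever you prefer):

    daemonize yes
    pidfile /var/run/redis.pid
    loglevel notice
    logfile /var/log/redis_6379.log
    dir /var/redis/6379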

  9. Don’t copy the standard Redis init script from the utils directory into /etc/init.d (it’s not Amazon Linux AMI/chkconfig compliant); instead, download the following one:
    wget https://raw.githubusercontent.com/saxenap/install-redis-amazon-linux-centos/master/redis-server
    


  10. Move and chmod the downloaded Redis init script:
    mv redis-server /etc/init.d
    chmod 755 /etc/init.d/redis-server
    


  11. Edit the redis-server init script and set the Redis conf file name as follows:
    > REDIS_CONF_FILE="/etc/redis/6379.conf"

    nano /etc/init.d/redis-server
    


  12. Enable the Redis instance to start automatically at boot:
    chkconfig --add redis-server
    chkconfig --level 345 redis-server on
    


  13. Start Redis:
    service redis-server start
    


  14. (optional) Add 'vm.overcommit_memory = 1' to /etc/sysctl.conf (otherwise a background save may fail under low-memory conditions, according to the Redis site):
    > vm.overcommit_memory = 1

    nano /etc/sysctl.conf
    


  15. Activate the new sysctl change:
    sysctl vm.overcommit_memory=1
    


  16. Try pinging your instance with redis-cli:
    /usr/local/bin/redis-cli ping
    


  17. Do a few tests with redis-cli and check that the dump file is correctly stored in /var/redis/6379/ – you should find a file called dump.rdb (see also the optional check below):
    /usr/local/bin/redis-cli
    >set testkey testval
    >get testkey
    >del testkey
    >exit
    

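    If you don’t want to wait for the configured save points to kick in, you can force a snapshot with BGSAVE and then list the data directory (just one possible way to check):

    /usr/local/bin/redis-cli bgsave
    ls -l /var/redis/6379/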

  18. Check that your Redis instance is logging correctly to the log file:
    cat /var/log/redis_6379.log
    


 

And that’s basically it. Cheers.

 

Redis Replication

Continuing my series of introductory posts on Redis, today I’ll address the subject of replication.

 

Definition:

  • Replication is a method by which other servers receive a continuously updated copy of the data as it’s being written, so that the replicas can service read queries.

 

Basic info (redis.io):

  • Redis uses asynchronous replication. Starting with Redis 2.8, however, slaves periodically (once every second) acknowledge the amount of replication stream they have processed.
  • A master can have multiple slaves.
  • Slaves are able to accept connections from other slaves. Aside from connecting a number of slaves to the same master, slaves can also be connected to other slaves in a graph-like structure.
  • Redis replication is non-blocking on the master side: the master will continue to serve queries while one or more slaves perform the first synchronization.
  • Replication is also non-blocking on the slave side: while the slave is performing the first synchronization it can reply to queries using the old version of the data set, assuming you configured Redis to do so in redis.conf. Otherwise, you can configure Redis slaves to send clients an error if the link with the master is down. However, there is a moment when the old dataset must be deleted and the new one loaded, during which the slave will block incoming connections.
  • Replication can be used both for scalability, in order to have multiple slaves for read-only queries (for example, heavy SORT operations can be offloaded to slaves), and simply for data redundancy.
  • It is possible to use replication to avoid the cost of saving on the master side: just configure the master’s redis.conf to avoid saving (comment out all the “save” directives), then connect a slave configured to save from time to time.

 

How Redis replication works (redis.io):

  • When you set up a slave, upon connection it sends a SYNC command, regardless of whether it’s the first time it has connected or a re-connection.
  • The master then starts background saving, and collects all new commands received that will modify the dataset. When the background saving is complete, the master transfers the database file to the slave, which saves it on disk, and then loads it into memory. The master will then send to the slave all accumulated commands, and all new commands received from clients that will modify the dataset. This is done as a stream of commands, in the same format as the Redis protocol itself.
  • You can try it yourself via telnet (see the example after this list): connect to the Redis port while the server is doing some work and issue the SYNC command. You’ll see a bulk transfer, and then every command received by the master will be re-issued in the telnet session.
  • Slaves are able to automatically reconnect when the master <-> slave link goes down for some reason. If the master receives multiple concurrent slave synchronization requests, it performs a single background save in order to serve all of them.
  • When a master and a slave reconnect after the link went down, a full re-sync is performed. However, starting with Redis 2.8, a partial re-synchronization is also possible.
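As a quick illustration of the telnet experiment (assuming a local Redis instance listening on the default port 6379), the session would look roughly like this; after the initial bulk transfer (the RDB payload shows up as binary noise in a terminal), every write command processed by the master is echoed into the session:

    telnet 127.0.0.1 6379
    SYNC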

 

In order to configure replication, all you have to do is add the line below to the slave’s redis.conf file, or issue the same as a command from the slave’s CLI (a runtime example follows below).

  • SLAVEOF <master_ip> <master_port>             (ex. SLAVEOF 127.0.0.1 6379)
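As a runtime sketch (assuming the master listens on 127.0.0.1:6379 and the slave on port 6380 of the same host; adjust to your setup), you can turn an instance into a slave and then check the replication link with redis-cli:

    redis-cli -p 6380 SLAVEOF 127.0.0.1 6379
    redis-cli -p 6380 INFO replication

Once the first synchronization completes, the INFO replication output should report role:slave and master_link_status:up.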

 

To tune the replication process you can play with the following options in the redis.conf file (a combined sample follows the list):

  • requirepass <password> – Require clients to issue AUTH <PASSWORD> before processing any other commands. This might be useful in environments in which you do not trust others with access to the host running redis-server (e.g. when you don’t run your own servers).
  • masterauth <master-password> – If the master is password protected (using the “requirepass” configuration directive above) it is possible to tell the slave to authenticate before starting the replication synchronization process, otherwise the master will refuse the slave request
  • slave-serve-stale-data <yes|no> – When a slave loses its connection with the master, or when the replication is still in progress, the slave can act in two different ways:
    • still reply to client requests, possibly with out-of-date data (the default behavior if the switch is set to “yes”)
    • or reply with the error “SYNC with master in progress” to all kinds of commands except INFO and SLAVEOF (if the switch is set to “no”)
  • slave-read-only <yes|no> – You can configure a slave instance to accept writes or not. Writing against a slave instance may be useful to store some ephemeral data (because data written on a slave will be easily deleted after re-sync with the master anyway), but may also cause problems if clients are writing to it because of a misconfiguration
  • repl-ping-slave-period <seconds> – Slaves send PINGs to the master at a predefined interval. It’s possible to change this interval with the repl-ping-slave-period option from the CLI. The default value is 10 seconds.
  • repl-timeout <seconds> – This option sets a timeout for both Bulk transfer I/O timeout and master data or ping response timeout. The default value is 60 seconds. It is important to make sure that this value is greater than the value specified for repl-ping-slave-period otherwise a timeout will be detected every time there is low traffic between the master and the slave.
  • repl-disable-tcp-nodelay <yes|no> – Controls whether to disable TCP_NODELAY on the slave socket after SYNC. If you select “yes” Redis will use a smaller number of TCP packets and less bandwidth to send data to slaves. But this can add a delay for the data to appear on the slave side, up to 40 milliseconds with Linux kernels using a default configuration. If you select “no” the delay for data to appear on the slave side will be reduced but more bandwidth will be used for replication. Default value of “no” is an optimization for low latency, but in very high traffic conditions or when the master and slaves are many hops away, turning this to “yes” may be a good idea.
  • slave-priority <integer> – The slave priority is an integer number published by Redis in the INFO output. It is used by Redis Sentinel in order to select a slave to promote into a master if the master is no longer working correctly. A slave with a low priority number is considered better for promotion, so for instance if there are three slaves with priority 10, 100, 25 Sentinel will pick the one with priority 10, that is the lowest. However a special priority of 0 marks the slave as not able to perform the role of master, so a slave with priority of 0 will never be selected by Redis Sentinel for promotion.
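To tie these together, the replication-related part of a slave’s redis.conf might look roughly like the following sketch (the master address and password are placeholders; the remaining values shown are the defaults):

    slaveof 192.168.1.10 6379
    masterauth mysecretpassword
    slave-serve-stale-data yes
    slave-read-only yes
    repl-ping-slave-period 10
    repl-timeout 60
    repl-disable-tcp-nodelay no
    slave-priority 100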

 

Allowing writes only with N attached replicas (redis.io):

  • Starting with Redis 2.8 it is possible to configure a Redis master in order to accept write queries only if at least N slaves are currently connected to the master, in order to improve data safety.
  • However, because Redis uses asynchronous replication it is not possible to ensure the slaves actually received a given write, so there is always a window for data loss.
  • This is how the feature works:
    • Redis slaves ping the master every second, acknowledging the amount of replication stream processed.
    • The Redis master remembers the last time it received a ping from each slave.
    • The user can configure a minimum number of slaves that have a lag not greater than a maximum number of seconds.
    • If there are at least N slaves, with a lag less than M seconds, then the write will be accepted.
  • There are two configuration parameters for this feature (see the example below):
    • min-slaves-to-write <number of slaves>
    • min-slaves-max-lag <number of seconds>
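For example, to have the master accept writes only when at least 3 slaves are connected with a lag of at most 10 seconds, the master’s redis.conf would contain something like this (the numbers are purely illustrative):

    min-slaves-to-write 3
    min-slaves-max-lag 10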

 

Have a nice weekend!

Redis Persistence

In today’s post I’d like to touch on Redis persistence mechanisms.

 

What we can choose from is basically two options (or a combination of the two):

  • The RDB persistence – which performs point-in-time snapshots of your dataset at specified intervals.
  • The AOF (append-only file) persistence – which logs every write operation received by the server; these operations can later be replayed at server startup, reconstructing the original dataset (commands are logged using the same format as the Redis protocol itself).

 

Both of those options are controlled by two different groups of configuration settings in the redis.conf file (a combined example follows the list):

  • RDB persistence:
    • save <seconds> <changes> – saving the DB on disk: the DB will be saved if both the given number of seconds and the given number of write operations against the DB occurred. You can have multiple save configurations “stacked” one after another, handling saves in different “seconds/changes” scenarios, or you can disable saving entirely by commenting out all the “save” lines.
    • stop-writes-on-bgsave-error <yes|no> – by default Redis will stop accepting writes if RDB snapshots are enabled (at least one save point) and the latest background save failed. This makes the user aware (in a hard way) that data is not persisting on disk properly. Once the background saving process starts working again, Redis will automatically allow writes again.
    • rdbcompression <yes|no> – compress string objects using LZF when dumping .rdb databases.
    • rdbchecksum <yes|no> – since version 5 of RDB a CRC64 checksum is placed at the end of the file, which makes the format more resistant to corruption, but there is a performance hit to pay (around 10%) when saving and loading RDB files.
    • dbfilename <name> – the filename (default dump.rdb) to dump the DB to.
    • dir <path> – the working directory (default value is ./) where the DB will be written. The append-only file will also be created inside this directory.
  • AOF persistence:
    • appendonly <yes|no> – controls whether AOF mode should be turned on. By default Redis asynchronously dumps the dataset on disk (RDB persistence), which is a mode good enough in many applications, but an issue with the Redis process or a power outage may result in a few minutes of writes lost (depending on the configured save points). AOF provides much better durability: using the default fsync policy, Redis can lose just one second of writes in a dramatic event like a server power outage, or a single write if something goes wrong with the Redis process itself while the operating system is still running correctly. AOF and RDB persistence can be enabled at the same time and they play very nicely together. If the AOF is enabled, on startup Redis will load the AOF, that is the file with the better durability guarantees.

    • appendfilename <name> – The name of the append only file (default: “appendonly.aof”)
    • appendfsync <mode> – the mode in which fsync should operate. The fsync() call tells the operating system to actually write data on disk instead of waiting for more data in the output buffer. Some OSes will really flush data on disk, some others will just try to do it ASAP. Redis supports three different modes:
      • <no>: don’t fsync, just let the OS flush the data when it wants. Faster.
      • <always>: fsync after every write to the append-only log. Slow, safest.
      • <everysec>: fsync only one time every second. A compromise (the default).
    • no-appendfsync-on-rewrite <yes|no> – when the AOF fsync policy is set to always or everysec, and a background saving process (a background save or AOF log background rewriting) is performing a lot of I/O against the disk, in some Linux configurations Redis may block too long on the fsync() call. In order to mitigate this problem it’s possible to use this option, which will prevent fsync() from being called in the main process while a BGSAVE or BGREWRITEAOF is in progress. In practical terms, this means that it is possible to lose up to 30 seconds of log in the worst scenario (with the default Linux settings).

    • auto-aof-rewrite-percentage <percentage> and auto-aof-rewrite-min-size <size> – are both related to automatic rewrite of the append only file. Redis is able to automatically rewrite the log file (implicitly calling BGREWRITEAOF) when the AOF log size grows by the specified percentage. This is how it works: Redis remembers the size of the AOF file after the latest rewrite (if no rewrite has happened since the restart, the size of the AOF at startup is used). This base size is compared to the current size. If the current size is bigger than the specified percentage, the rewrite is triggered. Also you need to specify a minimal size for the AOF file to be rewritten, this is useful to avoid rewriting the AOF file even if the percentage increase is reached but it is still pretty small. Specify a percentage of zero in order to disable the automatic AOF rewrite feature.
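To make this concrete, here is a rough sketch of how these directives could look in redis.conf (the save points and most values shown are the stock defaults; appendonly is set to yes here to enable AOF, and dir points at the data directory used in the installation guide above, so adjust everything to your own needs):

    save 900 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir /var/redis/6379
    appendonly yes
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb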

 

Advantages and disadvantages of both methods (redis.io):

  • RDB advantages
    • RDB is a very compact single-file point-in-time representation of your Redis data. RDB files are perfect for backups. For instance you may want to archive your RDB files every hour for the latest 24 hours, and to save an RDB snapshot every day for 30 days. This allows you to easily restore different versions of the data set in case of disasters.
    • RDB is very good for disaster recovery: being a single compact file, it can be transferred to far data centers, or onto Amazon S3 (possibly encrypted).
    • RDB maximizes Redis performance, since the only work the Redis parent process needs to do in order to persist is forking a child that will do all the rest. The parent instance will never perform disk I/O or the like.
    • RDB allows faster restarts with big datasets compared to AOF.
  • RDB disadvantages
    • RDB is NOT good if you need to minimize the chance of data loss in case Redis stops working (for example after a power outage). You can configure different save points where an RDB is produced (for instance after at least five minutes and 100 writes against the data set, but you can have multiple save points). However you’ll usually create an RDB snapshot every five minutes or more, so in case of Redis stopping working without a correct shutdown for any reason you should be prepared to lose the latest minutes of data.
    • RDB needs to fork() often in order to persist on disk using a child process. Fork() can be time consuming if the dataset is big, and may result in Redis stopping serving clients for some milliseconds or even for one second if the dataset is very big and the CPU performance is not great. AOF also needs to fork() but you can tune how often you want to rewrite your logs without any trade-off on durability.
  • AOF advantages
    • Using AOF, Redis is much more durable: you can have different fsync policies: no fsync at all, fsync every second, fsync at every query. With the default policy of fsync every second, write performance is still great (fsync is performed using a background thread and the main thread will try hard to perform writes when no fsync is in progress), but you can only lose one second worth of writes.
    • The AOF log is an append-only log, so there are no seeks, nor corruption problems if there is a power outage. Even if the log ends with a half-written command for some reason (disk full or other reasons) the redis-check-aof tool is able to fix it easily.
    • Redis is able to automatically rewrite the AOF in background when it gets too big. The rewrite is completely safe as while Redis continues appending to the old file, a completely new one is produced with the minimal set of operations needed to create the current data set, and once this second file is ready Redis switches the two and starts appending to the new one.
    • AOF contains a log of all the operations one after the other in an easy to understand and parse format. You can even easily export an AOF file. For instance, even if you flushed everything by mistake using a FLUSHALL command, if no rewrite of the log was performed in the meantime you can still save your data set just by stopping the server, removing the latest command, and restarting Redis again.
  • AOF disadvantages
    • AOF files are usually bigger than the equivalent RDB files for the same dataset.
    • AOF can be slower than RDB depending on the exact fsync policy. In general, with fsync set to every second, performance is still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still, RDB is able to provide more guarantees about the maximum latency even in the case of a huge write load.
    • Redis AOF works by incrementally updating an existing state, like MySQL or MongoDB do, while RDB snapshotting creates everything from scratch again and again, which is conceptually more robust.

 

The general advice from the Redis team is that you should use both persistence methods if you want a degree of data safety comparable to what PostgreSQL can provide you.

 

 

Take care!

 

Resources: