Optimizing fluentd

For one part of our infrastructure we’re currently logging into elasticsearch. We have fluentd collectors and a kibana interface for viewing and searching through the logs. This is how it works: logs are sent to a fluentd forwarder and then over the network to a fluentd collector, which pushes all the logs to elasticsearch. As we have plenty of logs, we need to incorporate some buffering – on both sides – using file buffers (buffer_type file) in the fluentd config. Here is a part of our fluentd config from the forwarder:

<match ***>
  type forward
  send_timeout 60s
  recover_wait 10s
  heartbeat_interval 1s
  phi_threshold 16
  hard_timeout 120s

  # buffer
  buffer_type file
  buffer_path /opt/fluentd/buffer/
  buffer_chunk_limit 8m
  buffer_queue_limit 4096
  flush_interval 10s
  retry_wait 20s

  # forward to collector
  <server>
    host 10.0.0.1
  </server>
  <secondary>
    type file
    path /opt/fluentd/failed/
  </secondary>
</match>

and the same for the collector

<source>
  type forward
  bind 10.0.0.1
</source>

<match log.**>
  type elasticsearch
  logstash_format true
  # elastic host
  host 10.0.0.3
  port 9200
  logstash_prefix log
  include_tag_key true

  # buffering
  buffer_type file
  buffer_path /opt/fluentd/buffer/
  flush_interval 5m
  buffer_chunk_limit 16m
  buffer_queue_limit 4096
  retry_wait 15s
</match>

So: for the forwarder, we’re using a buffer with at most 4096 chunks of 8 MB each = 32 GB of buffer space. The forwarder flushes every 10 seconds. For the collector we use bigger chunks, as elasticsearch is capable of handling them – but not the default 256 MB chunks, due to memory limitations. The flush period is longer – and should be – the recommended value is 5 minutes. With 4096 chunks of 16 MB we can keep up to 64 GB of buffer data.

What happens if one of the fluentd processes dies? Some data will probably be lost – whatever wasn’t yet written to the buffer. But when the connection is lost or the collector fluentd isn’t running, all logs collected by the forwarder are stored in the buffer and sent later, which is great. The same goes when ES is down for some reason: the collector node still receives data and can continue sending it to ES after a full recovery.

PS: don’t forget to make some tweaks to the system itself, like raising the limit on open files and some TCP tuning.
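For illustration, this is roughly the kind of thing we mean – the values below are just examples, not tuned recommendations, and the td-agent user name is an assumption (use whichever user runs your fluentd):

# /etc/security/limits.conf – raise the open-files limit for the fluentd user
td-agent soft nofile 65536
td-agent hard nofile 65536

# /etc/sysctl.conf – a few common TCP tweaks, applied with `sysctl -p`
net.core.somaxconn = 1024
net.ipv4.ip_local_port_range = 10240 65535
net.ipv4.tcp_tw_reuse = 1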

capistrano3: run gem binary

I needed to set up a deploy task to run eye. I tried to deal with gem-wrappers, but had no success. As capistrano3 uses non-interactive ssh, without loading the user’s environment (.profile, .bashrc etc.), any command that isn’t in PATH simply doesn’t work.

So, after searching and reading the capistrano (capistrano/rvm) source and then the sshkit source, I arrived at this simple solution.

It doesn’t depend on any other settings, nor on knowing where rvm is installed.

Before the change in deploy.rb (not working):

 INFO [57a66442] Running /usr/bin/env eye info on example.com
DEBUG [46484690] Command: cd /home/deploy/app/releases/20140130214109 && ( /usr/bin/env eye info )
DEBUG [46484690] 	/usr/bin/env: eye
DEBUG [46484690] 	: No such file or directory

After the change in deploy.rb:

 INFO [a2f9c75f] Running /usr/local/rvm/bin/rvm default do eye info on example.com

And here is the change itself:

set :rvm_remap_bins, %w{eye}

namespace :settings do
  task :prefix_rake do
    fetch(:rvm_remap_bins).each do |cmd|
      # reuse the rvm prefix capistrano/rvm already mapped for `gem`
      # (e.g. "/usr/local/rvm/bin/rvm default do gem"), strip the trailing
      # "gem" and run our binary through it instead
      SSHKit.config.command_map[cmd.to_sym] = "#{SSHKit.config.command_map[:gem].gsub(/gem$/,'')} #{cmd}"
    end
  end
end

after 'rvm:hook', 'settings:prefix_rake'
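With that in place, a deploy task can call eye through the command map like any other mapped binary – a minimal sketch (the task name and the eye arguments are only an illustration):

namespace :eye do
  desc 'Show eye status on the app servers'
  task :info do
    on roles(:app) do
      within release_path do
        execute :eye, 'info'   # expands to ".../rvm default do eye info"
      end
    end
  end
end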

The original code from capistrano/rvm is:

# https://github.com/capistrano/rvm/blob/master/lib/capistrano/tasks/rvm.rake
SSHKit.config.command_map[:rvm] = "#{fetch(:rvm_path)}/bin/rvm"

rvm_prefix = "#{fetch(:rvm_path)}/bin/rvm #{fetch(:rvm_ruby_version)} do"
fetch(:rvm_map_bins).each do |command|
  SSHKit.config.command_map.prefix[command.to_sym].unshift(rvm_prefix)
end

...
set :rvm_map_bins, %w{gem rake ruby bundle}

redis sentinel with ruby (on rails)

In the last article I showed how to install and use redis sentinel. As I’m using ruby, I need to use this new redis configuration from ruby (on rails).

For ruby on rails, use the redis-sentinel gem.

Then your redis initializer will look like this:

sentinels = [
  { host: '10.0.0.1', port: 17700 },
  { host: '10.0.0.2', port: 17700 },
  { host: '10.0.0.3', port: 17700 },
  { host: '10.0.0.4', port: 17700 }
]
# redis master name from sentinel.conf is 'master'
Redis.current = Redis.new(master_name: 'master', sentinels: sentinels)

You can then use your redis as usual.

When using sidekiq, the configuration is pretty simple too:

require 'sidekiq/web'
require 'redis-sentinel'
require 'sidetiq/web'

rails_root = ENV['RAILS_ROOT'] || File.dirname(__FILE__) + '/../..'
rails_env = ENV['RAILS_ENV'] || 'development'

sentinels = [
  { host: '10.0.0.1', port: 17700 },
  { host: '10.0.0.2', port: 17700 },
  { host: '10.0.0.3', port: 17700 },
  { host: '10.0.0.4', port: 17700 }
]

redis_conn = proc { 
  Redis.current = Redis.new(master_name: 'master', sentinels: sentinels) 
}
redis = ConnectionPool.new(size: 10, &redis_conn)

Sidekiq.configure_server do |config|
  config.redis = redis
end

Sidekiq.configure_client do |config|
  config.redis = redis
end

You can test your configuration: run the rails console and test with

Loading production environment (Rails 3.2.16)
1.9.3p448 :001 > Redis.current.keys("*").count
 => 746
1.9.3p448 :002 > Redis.current
 => #<Redis client v3.0.5 for redis://10.0.0.2:6379/0>

If you see “127.0.0.1:6379”, something is probably wrong. Then try to set/get some key and check Redis.current once again.
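For example, something like this in the console (the key name is made up) – the first real command makes the client ask the sentinels for the current master:

Redis.current.set('sentinel_test', 'ok')  # triggers master discovery via sentinel
Redis.current.get('sentinel_test')        # => "ok"
Redis.current                             # should now show the real master, not 127.0.0.1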

rails + passenger + nginx maintenance mode

I needed to add a maintenance page to a rails app running with passenger and nginx. Here’s the config and the steps.

You just need to add a static html file at app_root/public/maintenance.html – and I assume the css files are served from the /assets url.

So, here’s the nginx config:

server {
  listen 80;
  server_name www.example.com;
  root /home/deploy/www.example.com/public;
  passenger_enabled on;
  passenger_min_instances 5;

  set $maintenance 0;

  # is the maintenance file present?
  if (-f $document_root/../tmp/maintenance.txt) {
    set $maintenance 1;
  }

  # exclude /assets
  if ($uri ~* ^/assets/) {
    set $maintenance 0;
  }

  # in maintenance mode - send 503 status
  if ($maintenance = 1) {
    return 503;
  }

  # maintenance mode
  error_page 503 @503;

  # rewrite everything to maintenance.html
  location @503 {
    rewrite ^ /maintenance.html last;
    break;
  }
}

Setting maintenance mode is really simple – just create the app_root/tmp/maintenance.txt file – and to leave maintenance mode, remove that file again.
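If you deploy with capistrano, you can toggle it from your own machine – a minimal sketch, assuming capistrano 3 and that current_path/tmp is the tmp directory nginx checks:

namespace :maintenance do
  desc 'Enable maintenance mode (nginx starts answering 503)'
  task :on do
    on roles(:web) do
      execute :touch, "#{current_path}/tmp/maintenance.txt"
    end
  end

  desc 'Disable maintenance mode'
  task :off do
    on roles(:web) do
      execute :rm, '-f', "#{current_path}/tmp/maintenance.txt"
    end
  end
end

Then cap production maintenance:on and cap production maintenance:off do the trick.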

Need to split big SQL dump into separate databases?

I do. So I’ve written a small bash script to do so. I think it’s self-explanatory and does its job well :)

#!/bin/bash

if [[ $# -lt 1 ]]; then
  echo "Usage: $0 filename"
  exit 1
fi

FILE=$1
echo "Spliting "$FILE""

# default filename for sql headers
dbname=header

# read line by line; -r keeps backslashes, quoting keeps whitespace intact
while IFS= read -r line; do
  if [[ $line =~ ^USE\ \`([^\`]*)\`\; ]]; then
    dbname=${BASH_REMATCH[1]}
    echo "Found db '$dbname'"
  fi
  printf '%s\n' "$line" >> "$dbname.sql"
done < "$FILE"

I do NOT have CREATE DATABASE statements in the sql file, which is why the script keys on the USE statements.
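Usage is then simply (the script and dump file names here are just an example):

./split_dump.sh full_dump.sql
# produces header.sql plus one <dbname>.sql per database found;
# a single database can then be imported back with e.g.
mysql -u root -p somedb < somedb.sql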

Change image in UIImageView – RubyMotion

I needed to change the image in my UIImageView, but simply setting a new image using myImageView.setImage didn’t work. There’s a simple workaround.

@button = UIView.alloc.initWithFrame(frame)
buttonImage = UIImageView.alloc.initWithFrame(@button.bounds)
buttonImage.setTag(1) # set any number you want
buttonImage.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight
@button.addSubview buttonImage

And now set the image:

self.button.viewWithTag(1).setImage(UIImage.imageNamed('answer_bar_white.png').resizableImageWithCapInsets(UIEdgeInsetsMake(18, 18, 18, 18)))

You can remove resizableImageWithCapInsets – I’m using square images so the UIImageView can be of any size.

Blink LED2 – LPC1769

LPCXpresso finally beaten. I installed the new v5, imported all the needed CMSIS pieces and set up my very first MCU example code :) I’m using the LPC1769 sample board from Embedded Artists.

#ifdef __USE_CMSIS
#include "LPC17xx.h"
#endif

#include <cr_section_macros.h>
#include <NXP/crp.h>

// Variable to store CRP value in. Will be placed automatically
// by the linker when "Enable Code Read Protect" selected.
// See crp.h header for more information
__CRP const unsigned int CRP_WORD = CRP_NO_CRP ;

int main(void) {
    // Set P0_22 to 00 - GPIO
    LPC_PINCON->PINSEL1 &= (~(3 << 12));
    // Set GPIO - P0_22 - to be output
    LPC_GPIO0->FIODIR |= (1 << 22);

    volatile static uint32_t i;
    while (1) {
        LPC_GPIO0->FIOSET = (1 << 22); // Turn LED2 on
        for (i = 0; i < 1000000; i++);
        LPC_GPIO0->FIOCLR = (1 << 22); // Turn LED2 off
        for (i = 0; i < 1000000; i++);
    }
    return 0;
}

And the same in assembler (from http://pygmy.utoh.org/riscy/cortex/led-lpc17xx.html):

;;; led-lpc17xx.asm
;;; written by Frank Sergeant
;;;    frank@pygmy.utoh.org
;;;    http://pygmy.utoh.org/riscy
;;; This program is in the public domain.  See http://pygmy.utoh.org/riscy/cortex/
;;; for notes about the program and how to assemble, link, and burn to flash.

;;; Blink the LED on the LPCXpresso LPC1769 ARM Cortex M3 board
;;; (or any LPC17xx ARM board with perhaps minor modifications).

;;; The LED on the Xpresso board is labeled LED2 and is just to the
;;; left of (inside of) J6-36.  It is connected to P0.22. The LED is
;;; on when P0.22 is high.
        
;;; Directives
        .thumb                  ; (same as saying '.code 16')
        .syntax unified

;;; Equates

        .equ LED_MASK, 0x00400000 ; i.e., bit 22
        
        .equ PINSEL0,  0x4002C000
        .equ PINSEL1,  0x4002C004

        .equ FIO0DIR,      0x2009C000 ; port direction, 0 (default) = input
        .equ FIO0MASK,     0x2009C010 ; pin mask, 0 (default) = pin affected by FIOPIN/FIOSET/FIOCLR
        .equ FIO0PIN,      0x2009C014
        .equ FIO0SET,      0x2009C018
        .equ FIO0CLR,      0x2009C01C
        
        .equ STACKINIT,   0x10004000

        .equ LEDDELAY,    300000

.section .text
        .org 0

;;; Vectors
vectors:
        .word STACKINIT         ; stack pointer value when stack is empty
        .word _start + 1        ; reset vector (manually adjust to odd for thumb)
        .word _nmi_handler + 1  ;
        .word _hard_fault  + 1  ;
        .word _memory_fault + 1 ;
        .word _bus_fault + 1    ;
        .word _usage_fault + 1  ;

_start:

        ldr r6, = PINSEL1
        ;; set P0.22 as a GPIO pin
        ;; P0.22 is controlled by bits 13:12 of PINSEL1
        ;; xxxx xxxx xxxx xxxx xx11 xxxx xxxx xxxx
        ;;    0    0    0    0    3    0    0    0

        ldr r0, [r6]
        bic r0, r0, # 0x00003000  ; clear bits 13:12 to force GPIO mode
        str r0, [r6]


        ;; set LED output pin (i.e. P0.22) as an output
        ldr r6, = FIO0DIR             ; for PORT0
        mov r0, # LED_MASK            ;  all inputs except for pin 22
        str r0, [r6]
        
        ;; r0 still contains LED_MASK 
        ldr r5, = FIO0CLR
        ldr r6, = FIO0SET

loop:
        str r0, [r5]            ; clear P0.22, turning off LED
        ldr r1, = LEDDELAY
delay1:
        subs r1, 1
        bne delay1

        str r0, [r6]            ; set P0.22, turning on LED
        ldr r1, = LEDDELAY
delay2:
        subs r1, 1
        bne delay2

        b loop                 ; continue forever

_dummy:                        ; if any int gets triggered, just hang in a loop
_nmi_handler:
_hard_fault:
_memory_fault:
_bus_fault:
_usage_fault:
        add r0, 1
        add r1, 1
        b _dummy

Download file using AFNetworking

Set up your Rakefile:

  app.pods do
    pod 'AFNetworking'
    ...
  end

And then define the download method:

  def downloadFile(url, file, filesize)
    url = NSURL.URLWithString(url)
    request = NSURLRequest.requestWithURL(url)
    operation = AFHTTPRequestOperation.alloc.initWithRequest(request)
    operation.outputStream = NSOutputStream.outputStreamToFileAtPath(file, append: false)

    unless filesize.nil?
      SVProgressHUD.showProgress(0, status: "Downloading file")
      operation.setDownloadProgressBlock(lambda{|bytesRead, totalBytesRead, totalBytesExpected|
        SVProgressHUD.showProgress((((totalBytesRead/filesize.to_f)*100.0).round)/100.0, status: "Downloading file")
      })
    end

    operation.setCompletionBlockWithSuccess(lambda{|request, response|
      SVProgressHUD.dismiss unless filesize.nil?
    }, failure: lambda{|request, err|
      SVProgressHUD.dismiss unless filesize.nil?

      @alert = UIAlertView.alloc.initWithTitle('Error',
          message: 'Error when downloading data.', delegate: nil, cancelButtonTitle: 'OK',
          otherButtonTitles: nil)
      @alert.show
    })
    operation.start
  end

When downloading multiple files, use NSOperationQueue:

# change downloadFile to return the operation instead of starting it

def downloadFile(url, file, filesize)
  ...
  operation   # was operation.start
end
@queue = NSOperationQueue.alloc.init
@queue.name = "FileDownload"
@queue.maxConcurrentOperationCount = 1   # number of concurrent downloads

@queue.addOperation(downloadFile(url, file, filesize))
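If you need to block a (non-main) thread until all queued downloads have finished, NSOperationQueue can wait for the queue to drain:

# blocks the calling thread until every queued operation has finished;
# don't call this on the main thread
@queue.waitUntilAllOperationsAreFinished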