R – Frequency, take two

Suppose we have the following data:
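
A minimal sketch of the kind of input involved – the original listing is gone, so the column names gender and type and the values are assumptions:

```r
# hypothetical input data
d <- data.frame(
  gender = c("male", "female", "female", "male", "female", "female"),
  type   = c("A",    "A",      "B",      "B",    "A",      "B")
)
```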

We need to find the absolute and relative frequency of women and men, broken down further by the value in type.

Again, we convert it to a data frame
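
A sketch of this step, assuming the frame d above – cross-tabulate first, then convert:

```r
tbl <- table(d$gender, d$type)    # absolute frequencies
tbl.frame <- as.data.frame(tbl)   # columns: Var1 (gender), Var2 (type), Freq
```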

And we have the absolute frequency. Now let’s compute the relative one.
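
Judging by the explanation below, this was a plyr call along these lines:

```r
library(plyr)
# within each level of Var2, relative frequency = Freq / sum(Freq)
tbl.frame <- ddply(tbl.frame, .(Var2), transform, rel = Freq / sum(Freq))
```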

Which means: for each subset of tbl.frame, split by the variable Var2, run transform and compute the relative frequency from Freq (the absolute frequency) and the sum of the Freq values.

R – frequency and a chart

I was tasked from the highest places with producing a statistical evaluation of some questionnaires. And since I deeply dislike Excel, I found R and I’m giving it a try.

I put the data into a CSV file and loaded it into R (RStudio)
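
A sketch of the load – the file name is hypothetical; the columns used below are edu and gender:

```r
# hypothetical file name – use your own CSV
data <- read.csv("dotaznik.csv", header = TRUE)
```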

Let’s try to build the frequency of education (edu), and later we’ll break it down by gender as well.
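
Presumably something like:

```r
tbl <- table(data$edu)   # absolute frequency of each education level
```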

We have a table, but we need a data frame out of it
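
The conversion is a one-liner:

```r
tbl.frame <- as.data.frame(tbl)   # columns: Var1 (edu), Freq
```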

We add the relative frequency (mean)
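
A sketch; the column name mean follows the text above:

```r
tbl.frame <- transform(tbl.frame, mean = Freq / sum(Freq))
```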

And now for a chart. We’re using ggplot2.
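
A minimal bar chart over the frame built above:

```r
library(ggplot2)
ggplot(tbl.frame, aes(x = Var1, y = Freq)) +
  geom_bar(stat = "identity")
```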

Rplot1

Fine. But we want to flip the chart
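
Most likely just coord_flip() added to the previous call:

```r
ggplot(tbl.frame, aes(x = Var1, y = Freq)) +
  geom_bar(stat = "identity") +
  coord_flip()
```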

Rplot2

Now let’s drop the x-axis label and rewrite the y-axis one.
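
A sketch – the new label text is an assumption; note that with coord_flip() the x aesthetic is the one drawn vertically:

```r
ggplot(tbl.frame, aes(x = Var1, y = Freq)) +
  geom_bar(stat = "identity") +
  coord_flip() +
  xlab("") + ylab("count")
```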

Rplot3

And finally, we add the frequency values to the individual bars.
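
geom_text() is the usual tool for this:

```r
ggplot(tbl.frame, aes(x = Var1, y = Freq)) +
  geom_bar(stat = "identity") +
  coord_flip() +
  xlab("") + ylab("count") +
  geom_text(aes(label = Freq), hjust = -0.2)   # value just past the end of each bar
```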

Rplot4

And now for something a bit more complex. Let’s split it by men and women as well.
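
A sketch reusing the approach from the frequency post above – cross-tabulate edu against gender and add per-gender relative frequencies with plyr:

```r
library(plyr)
tbl2.frame <- as.data.frame(table(data$edu, data$gender))   # Var1 = edu, Var2 = gender, Freq
tbl2.frame <- ddply(tbl2.frame, .(Var2), transform, mean = Freq / sum(Freq))
```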

and add a chart
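
Assuming side-by-side bars per gender:

```r
ggplot(tbl2.frame, aes(x = Var1, y = Freq, fill = Var2)) +
  geom_bar(stat = "identity", position = "dodge") +
  coord_flip()
```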

Rplot6

And what if we want a single bar, but split by the values?

And the chart
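
Stacking is ggplot2’s default bar position, so it’s enough to drop the dodge:

```r
ggplot(tbl2.frame, aes(x = Var1, y = Freq, fill = Var2)) +
  geom_bar(stat = "identity")   # position = "stack" is the default
```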

Rplot7


STM32F4Discovery + eLua + OSX

I fixed a few bugs when compiling eLua from git on OS X – clone my branch, or clone the official repo here.

create the .bin file:
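
A plausible sketch – converting the built ELF with objcopy is the usual way; the file names are assumptions:

```sh
# hypothetical file names – adjust to your build output
arm-none-eabi-objcopy -O binary elua_lua_stm32f407vg.elf elua.bin
```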

create an openocd config file for the stm32f4 board:
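
A minimal sketch for the on-board ST-LINK/V2; the interface/target script names vary across OpenOCD versions:

```tcl
# stm32f4.cfg
source [find interface/stlink-v2.cfg]
source [find target/stm32f4x.cfg]
```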

and upload to the board:
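
A plausible invocation, assuming the stm32f4.cfg and elua.bin from above (0x08000000 is the STM32F4 internal flash base):

```sh
openocd -f stm32f4.cfg \
  -c "init" -c "reset halt" \
  -c "flash write_image erase elua.bin 0x08000000" \
  -c "reset run" -c "shutdown"
```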

Now you can connect to the board using /dev/tty.usbmodem621, 115200/8/N/1. You should get this prompt:
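
One way to open the session (the exact banner isn’t reproduced here):

```sh
screen /dev/tty.usbmodem621 115200
# you should land at the interactive eLua shell prompt (eLua#)
```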

enjoy :)

nodemcu – simple webserver

I’ve modified and updated this source:
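
The modified listing itself is gone; a minimal NodeMCU web server in the same spirit, using the firmware’s net module:

```lua
-- minimal sketch of a NodeMCU web server
srv = net.createServer(net.TCP)
srv:listen(80, function(conn)
  conn:on("receive", function(sck, payload)
    sck:send("HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n" ..
             "<h1>Hello from NodeMCU</h1>")
  end)
  conn:on("sent", function(sck) sck:close() end)
end)
```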

nodemcu + el capitan

To flash:

  • upgrade firmware (see the sketch below)
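
A plausible route on El Capitan uses esptool.py; the serial device path and firmware file name are assumptions (a CP210x/CH340 driver is typically needed):

```sh
pip install esptool
esptool.py --port /dev/tty.SLAB_USBtoUART write_flash 0x00000 nodemcu_firmware.bin
```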

eye killed?

Ever seen this?

If so, the solution is quite simple: run rm -rf ~/.eye to remove the eye folder. Then re-run eye load and it should all work :)

RubyMotion – new bug found

I’m using the motion-addressbook gem in my project. Reported as http://hipbyte.myjetbrains.com/youtrack/issue/RM-630

Fixed in 2.37:


= RubyMotion 2.37 =

* Fixed a regression where Dispatch.once did not work correctly when it was
invoked in some points.
* Fixed a bug where the compiler would crash with an assertion message
`[BUG] Object: SubtypeUntil ...' when compiling certain Ruby files.
* Fixed a bug in the compiler when compiling for ARM64 where certain types
would still be emitted with a 32-bit architecture in mind.
* Fixed a bug in the compiler when compiling for ARM64 where certain structs
would not be properly available (such as NSDecimal).

AWS S3 IAM user policy setup for bucket

First of all, set up your AWS account and log in to the AWS Management Console. Then create any bucket you want.

The next step is to add a new IAM user – go to https://console.aws.amazon.com/iam/home?#users, select Users and click the Create new user button.

iam_create

Click Create and copy/paste or download your access keys.

iam_access

Then close the window and switch to the Summary tab – you’ll need to copy this user’s ARN.

iam_summary


s3cmd

To be able to sync some folders and/or use the s3cmd command-line tool, you need to set up a few more policies. Click the Permissions tab and then Attach User Policy. Then choose Policy Generator and follow these steps:

manage_user_policy

Set the permissions as shown in the following image – we need ListAllMyBuckets for :::*, and ListBucket and PutObject for the one specified bucket.

user_policy_editor


Click Continue and save the new policy

set_permission

You can always edit the policy later using Manage Policy. You should end up with this kind of policy file:
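
A reconstruction matching the permissions described above (‘sample_bucket’ stands in for your bucket name, as in the rest of this post):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::sample_bucket",
        "arn:aws:s3:::sample_bucket/*"
      ]
    }
  ]
}
```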

That’s all. Let’s move on to the bucket policy.

Bucket policy

Now you can open the Policy Generator – http://awspolicygen.s3.amazonaws.com/policygen.html – and start adding new items.

First of all, select your policy type – S3

policy_select_type

and then fill in the new policy – we need two statements: one allowing all users to read objects (GetObject), and a second allowing your new user to upload, delete, and get objects as well.

public_policy_form

and click Add Statement. The list below the form should look like this:

public_policy_list

Now add your user the same way: as the Principal, use the ARN you copied earlier, and add the Actions shown below.

policy_user

All set. Click the Generate Policy button and copy the freshly generated JSON policy to your clipboard.
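
Roughly, the result should look like this (the account id and user name in the ARN are placeholders):

```json
{
  "Id": "Policy-sample-bucket",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::sample_bucket/*"
    },
    {
      "Sid": "UserReadWrite",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/sample-user" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::sample_bucket/*"
    }
  ]
}
```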

The last step is to attach this policy to the bucket. Go back to your S3 console (https://console.aws.amazon.com/s3/home), click the loupe icon next to the bucket name and open the Permissions section. Click Edit Bucket Policy and paste the generated policy into that window. Save, and you’re done.

bucket_permission


You’ll need the API keys for use with Paperclip or any other S3 storage engine, so keep them safe.

PS: if you plan to use S3 as storage for your static files, avoid using underscores in your bucket name – they don’t comply with a valid FQDN. ‘sample_bucket’ was used only as an example; in the real world you should use ‘sample-bucket’.

ATtiny13 – Hello World! :)

Finally managed to get into AVR programming. I’m using OS X, which I’ve found to be the worst platform for doing any kind of embedded programming :( Sad. Anyway, here are my first ATtiny schematics and code. I’m using an AVR Dragon to flash the code into the MCU.

tiny13-spi

The circuit is quite simple. I just confirmed that SPI flashing works and that I’m able to turn the LED on. So here’s some code (using avr-gcc to compile and avrdude to upload). Create a new project using avr-project and add this to the main.c file:
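
A minimal sketch of such a main.c, assuming the LED sits on PB0:

```c
#include <avr/io.h>

int main(void)
{
    DDRB  |= _BV(PB0);   /* PB0 as output */
    PORTB |= _BV(PB0);   /* drive PB0 high - LED on */

    for (;;)
        ;                /* loop forever */

    return 0;
}
```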

Now we can compile the firmware and upload it to our MCU.
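
Assuming the project layout generated by CrossPack’s avr-project (a firmware directory with a Makefile):

```sh
cd firmware
make    # runs avr-gcc and produces main.hex
```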

No errors. Great. Connect the AVR Dragon and upload the new firmware:
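
A plausible avrdude invocation for the Dragon in ISP mode (fuses left at their defaults):

```sh
avrdude -c dragon_isp -p t13 -P usb -U flash:w:main.hex:i
```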

tiny13-spi-foto

For those interested in the .hex file – this is what it looks like:

and I made some commented assembler output (the base is taken from avr-objdump -S main.elf).


Optimizing fluentd

We’re currently using (for one part of our infrastructure) logging into Elasticsearch. We have fluentd collectors and a Kibana interface for viewing and searching through the logs.

fluentd

This is how it works: logs are sent to a fluentd forwarder and then over the network to a fluentd collector, which pushes all the logs to Elasticsearch. As we have plenty of logs, we need to incorporate some buffering – on both sides – using file buffering (buffer_type file) in the fluentd config. Here is a part of our fluentd config from the forwarder
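
A sketch of the forwarder side in the classic v0.12 syntax – the match pattern, host name and buffer path are assumptions; the numbers follow the discussion below:

```
<match **>
  type forward

  <server>
    # assumption - your collector's address
    host collector.example.com
    port 24224
  </server>

  # file buffer: 4096 chunks x 8 MB = 32 GB of buffer space
  buffer_type file
  buffer_path /var/log/td-agent/buffer/forward
  buffer_chunk_limit 8m
  buffer_queue_limit 4096
  flush_interval 10s
</match>
```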

and the same for the collector
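
Again a sketch, assuming fluent-plugin-elasticsearch; the chunk numbers follow the discussion below:

```
<match **>
  type elasticsearch
  host localhost
  port 9200
  logstash_format true

  # bigger chunks for ES: 1024 chunks x 64 MB = 64 GB of buffer space
  buffer_type file
  buffer_path /var/log/td-agent/buffer/elasticsearch
  buffer_chunk_limit 64m
  buffer_queue_limit 1024
  flush_interval 5m
</match>
```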

So: for the forwarder we’re using a buffer with at most 4096 8 MB chunks = 32 GB of buffer space, and the forwarder flushes every 10 seconds. For the collector we use bigger chunks, as Elasticsearch is capable of handling them – but not the default 256 MB chunks, due to memory limitations. The flushing period is longer – and it should be – the recommended value is 5 minutes. We can keep up to 64 GB of buffered data.

What happens if one of the fluentd instances dies? Some data will probably be lost – whatever wasn’t yet saved to the buffer. But when the connection is lost, or the collector fluentd isn’t running, all the logs collected by the forwarder are stored in its buffer and sent later, which is great. The same goes for ES being down for some reason: the collector node still receives data and can resume sending it into ES after full recovery.

PS: don’t forget to make some tweaks to the system itself, like raising the limit on max open files and doing some TCP tuning.
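
For example (the values are illustrative, not tuned recommendations):

```sh
# raise the open-file limit for the fluentd user
# (persist it in /etc/security/limits.conf)
ulimit -n 65536

# a couple of common TCP tweaks
sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.core.somaxconn=1024
```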