Author: margusja
Find a class in jars
[margusja@sandbox storm_to_file]$ find libs/hbase-0.96.2-hadoop2/lib/ -name '*.jar' -exec grep -Hls HBaseConfiguration {} \;
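If you already have a candidate jar in mind, listing its entries is a quick way to confirm the class is there (a small sketch assuming unzip is installed; the jar name is just a placeholder):
[margusja@sandbox storm_to_file]$ unzip -l libs/hbase-0.96.2-hadoop2/lib/some-candidate.jar | grep HBaseConfiguration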
Hadoop and how to read via webhdfs
Create a local demo file, demo.txt, and put some content into it.
Upload to hdfs:
margusja@IRack:~/hadoop-2.4.0$ bin/hdfs dfs -put demo.txt hdfs://127.0.0.1/user/margusja/
Open and read via webhdfs:
margusja@IRack:~/hadoop-2.4.0$ curl -i -L "http://localhost:50070/webhdfs/v1/user/margusja/demo.txt?op=OPEN"
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 25 Apr 2014 09:34:46 GMT
Date: Fri, 25 Apr 2014 09:34:46 GMT
Pragma: no-cache
Expires: Fri, 25 Apr 2014 09:34:46 GMT
Date: Fri, 25 Apr 2014 09:34:46 GMT
Pragma: no-cache
Location: http://sandbox.hortonworks.com:50075/webhdfs/v1/user/margusja/demo.txt?op=OPEN&namenoderpcaddress=sandbox.hortonworks.com:8020&offset=0
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26)
HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Fri, 25 Apr 2014 09:34:46 GMT
Date: Fri, 25 Apr 2014 09:34:46 GMT
Pragma: no-cache
Expires: Fri, 25 Apr 2014 09:34:46 GMT
Date: Fri, 25 Apr 2014 09:34:46 GMT
Pragma: no-cache
Content-Length: 93
Content-Type: application/octet-stream
Server: Jetty(6.1.26)
Hello, this is a demo file by Margusja@deciderlab.com
This is demo how to use hadoop webhdfs
margusja@IRack:~/hadoop-2.4.0$
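The other WebHDFS operations follow the same REST pattern; for example, listing the directory returns a JSON FileStatuses document (a sketch, same host and port assumed):
margusja@IRack:~/hadoop-2.4.0$ curl -i "http://localhost:50070/webhdfs/v1/user/margusja?op=LISTSTATUS"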
Apache-storm
Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate.
Apache-zookeeper http://zookeeper.apache.org/doc/trunk/index.html
Zookeeper is meant to hold the configuration and status of services. In this case, for example, the zookeeper servers store the information about which storm workers exist.
The zookeeper service can be spread across separate servers, which gives high fault tolerance.
Zookeeper stores the configuration in a hierarchy.
For example, in my test environment there is one storm supervisor (worker) and currently one topology; this shows up in zookeeper:
[root@sandbox ~]# /usr/lib/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181
Connecting to 127.0.0.1:2181
…
[zk: 127.0.0.1:2181(CONNECTED) 1] ls /storm
[workerbeats, errors, supervisors, storms, assignments]
[zk: 127.0.0.1:2181(CONNECTED) 2] ls /storm/storms
[engineMessages5-2-1398208863]
[zk: 127.0.0.1:2181(CONNECTED) 3]
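You can also look at the data stored under a znode with get, although Storm keeps its state there in serialized form, so the payload is not really human-readable (shown only as a sketch):
[zk: 127.0.0.1:2181(CONNECTED) 3] get /storm/storms/engineMessages5-2-1398208863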
Zookeeper makes it possible to add storm workers dynamically. The storm master, nimbus, uses the information it gets from the zookeeper server to put the workers to work. For example, if a worker becomes unreachable for some reason, zookeeper learns about it because the heartbeats stop, and nimbus reroutes the streams, creating a replacement for the lost worker, provided there is somewhere to put it, i.e. there are still free supervisors.
Storm
Nimbus
The master of the topology, which coordinates the work of the storm-supervisors using the information kept in the zookeeper cluster, i.e. makes sure the streams pass through the topology.
Storm-supervisor, i.e. worker
The Spout(s) and/or Bolt(s) that belong to some topology. They can sit on one physical server or be distributed across several physical servers. Through zookeeper they let nimbus know that they exist.
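For completeness, a topology reaches nimbus (and through it the workers) by submitting it with the storm command-line client; a sketch, where the jar and class names are placeholders rather than my actual topology:
[root@sandbox ~]# storm jar mytopology.jar com.example.MyTopology myTopologyName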
Storm-supervisor versus supervisor (http://supervisord.org/)
Up front I should explain that two different supervisor services are in use here, and the difference needs spelling out.
storm-supervisor – the storm worker.
supervisor – a Process Control System.
It is used to make sure that the services – nimbus, zookeeper, storm-supervisor (worker) – are brought back quickly if any of them should stop for some reason.
Example:
At the moment the storm services needed in my test environment are under the control of supervisor (not storm-supervisor):
[root@sandbox ~]# supervisorctl
storm-supervisor RUNNING pid 3483, uptime 2:14:55
storm_nimbus RUNNING pid 3765, uptime 1:44:23
storm_ui RUNNING pid 3672, uptime 2:13:09
zookeeper RUNNING pid 3484, uptime 2:14:55
supervisor>
Killing, for example, the storm_nimbus process 3765:
[root@sandbox ~]# kill -9 3765
In the supervisord log:
2014-04-22 17:53:20,884 INFO exited: storm_nimbus (terminated by SIGKILL; not expected)
2014-04-22 17:53:20,884 INFO received SIGCLD indicating a child quit
2014-04-22 17:53:21,894 INFO spawned: 'storm_nimbus' with pid 4604
2014-04-22 17:53:22,898 INFO success: storm_nimbus entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Let's check the supervisord status:
supervisor> status
storm-supervisor RUNNING pid 3483, uptime 2:30:50
storm_nimbus RUNNING pid 4604, uptime 0:00:38
storm_ui RUNNING pid 3672, uptime 2:29:04
zookeeper RUNNING pid 3484, uptime 2:30:50
We can see that a new process has just been started.
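The automatic restart behaviour comes from supervisord's program definitions. A minimal sketch of such an entry, where the command path and user are assumptions rather than my actual config:
[program:storm_nimbus]
command=/usr/lib/storm/bin/storm nimbus
autostart=true
autorestart=true
user=storm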
In production systems it is recommended to split the storm components so that nimbus, ui and one zookeeper server sit on one machine, and each of the other machines holds a zookeeper server and a storm-supervisor. zookeeper-server + storm-supervisor pairs can be added dynamically later.
As a side note, storm-supervisor and zookeeper do not actually have to sit on the same physical server. It is enough that the storm-supervisor knows where the zookeeper server is, so it can report its status there.
If for some reason a storm-supervisor becomes unreachable, nimbus finds out and rearranges the topology so that the stream stays complete.
If for some reason nimbus becomes unreachable, the topology stays intact and the streams keep running.
If nimbus and a storm-supervisor that is actively part of the topology become unreachable at the same time, we get the first real problem. Even then the data moving in the stream is not lost: with the right configuration every Spout in the topology registers that messages in the stream have not arrived, and once nimbus and/or the storm-supervisor is restored, the Spout sends the messages again.
Imagine we have the topology shown below.
All the Bolts and Spouts sit on separate machines, i.e. they are storm-supervisor + zookeeper pairs. Now suppose the following situation occurs:
Nimbus learns about it, because the heartbeats to zookeeper stop, and it tries to find, via zookeeper, some free server that has a storm-supervisor.
If zookeeper can point nimbus to a free storm-supervisor, the topology is restored. If some message did not arrive in the meantime because the topology was incomplete, the Spout knows about it and sends the message again.
to be continued…
Pentaho and Saiku
Some screenshots that may help me in the future.
Add a new datasource
Configure a new MySQL connection
Add a new datasource wizard
Select your external DB
Select the fact table as the Mondrian star structure requires (basically you will build a Mondrian schema)
Map the fact table to the dimension tables
Save it
If you want edit mode, choose "Customize model now"
Go and create a new Saiku Analytics
Set columns and rows
Now you can enjoy your nice graphs
Display and nodes
Radio module (RFM12B) datasheet, 868 MHz
The sensors send serial data to the receiver.
Data transmitters: they use the RFM12B 868 MHz radio module in TX mode.
Up to 256 radios can be put into one network to talk to each other.
The sensors draw 0.3 mA (idle) and are optimized for battery power. For example, three 1.5 V cells in series (2200 mAh) can power one for over a month.
While transmitting data the draw is 0.7 mA.
The sensors need a logic HIGH of 3.3 V.
The display logic also needs 3.3 V. The LCDs I use need a logic HIGH of 5 V, and 5 V for the backlight as well.
The receivers use the same RFM12B 868 MHz radio modules, in RX mode.
In addition, the data received by the receiver can be sent to a server on the internet.
Let's build a calculator
This is the first prototype of a simple calculator that can add 4-bit numbers.
The green and blue LEDs are the input registers, which I can set with the buttons in the middle. The output register is the red LEDs, and the leftmost red LED is the carry out.
The chip I use is 74HCT283
The pull-down resistors are 4.7 kΩ (VCC is 5 V).
At the moment I can only add.
In the future I'll implement subtract, multiply and divide functions.
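To sanity-check what the 74HCT283 computes, here is a tiny shell sketch of 4-bit addition with a carry-out (purely illustrative, nothing to do with the actual wiring):
$ a=$((2#1011)); b=$((2#0110))
$ echo "sum=$(( (a + b) & 15 )) carry=$(( (a + b) >> 4 ))"
sum=1 carry=1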
mahout and recommenditembased
Let's imagine we have data about how users have rated the products they bought:
userID, productID, rating.
With mahout's recommenditembased class we can then recommend new products to our users. Here is a simple command line example of how to do this.
Let's create a file holding our existing data about users, products and ratings:
vim intro.csv
1,101,5.0
1,102,3.0
1,103,2.5
2,101,2.0
2,102,2.5
2,103,5.0
2,104,2.0
3,101,2.5
3,104,4.0
3,105,4.5
3,107,5.0
4,101,5.0
4,103,3.0
4,104,4.5
4,106,4.0
5,101,4.0
5,102,3.0
5,103,2.0
5,104,4.0
5,105,3.5
Put it into hadoop dfs:
hdfs dfs -moveFromLocal intro.csv input/
We need an output directory in Hadoop DFS:
[speech@h14 ~]$ hdfs dfs -mkdir output
Now we can run the recommender command:
[speech@h14 ~]$ mahout/bin/mahout recommenditembased --input input/intro.csv --output output/recommendation -s SIMILARITY_PEARSON_CORRELATION
Our result will be in output/recommendation in Hadoop DFS:
[speech@h14 ~]$ hdfs dfs -cat output/recommendation/part-r-00000
1 [104:3.9258494]
3 [102:3.2698717]
4 [102:4.7433763]
But what if we do not have ratings, only users and the items they have bought? We can still use the mahout recommenditembased class.
[speech@h14 ~]$ vim boolean.csv
1,101
1,102
1,103
2,101
2,102
2,103
2,104
3,101
3,104
3,105
3,107
4,101
4,103
4,104
4,106
5,101
5,102
5,103
5,104
5,105
[speech@h14 ~]$ hdfs dfs -moveFromLocal boolean.csv input/
[speech@h14 ~]$ mahout/bin/mahout recommenditembased --input /user/speech/input/boolean.csv --output output/boolean -b -s SIMILARITY_LOGLIKELIHOOD
[speech@h14 ~]$ hdfs dfs -cat /user/speech/output/boolean/part-r-00000
1 [104:1.0,105:1.0]
2 [106:1.0,105:1.0]
3 [103:1.0,102:1.0]
4 [105:1.0,102:1.0]
5 [106:1.0,107:1.0]
[speech@h14 ~]$
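One thing to keep in mind: like any Hadoop MapReduce job, mahout refuses to run if the output directory already exists, so remove it before re-running (a sketch):
[speech@h14 ~]$ hdfs dfs -rm -r output/boolean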
Audio (Estonian) to text with Kaldi
https://github.com/alumae/kaldi-offline-transcriber
CentOS release 6.5 (Final) Linux vm38 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@h14 ~]# yum groupinstall "Development Tools"
[root@h14 ~]# yum install zlib-devel
[root@h14 ~]# yum install java-1.7.0-openjdk.x86_64
[root@vm38 ~]# yum install ffmpeg
[root@vm38 ~]# yum install sox
[root@vm38 ~]# yum install atlas
[root@vm38 ~]# yum install atlas-devel
[root@vm38 ~]# su - margusja
[margusja@vm38 ~]$ mkdir kaldi
[margusja@vm38 ~]$ cd kaldi/
[margusja@vm38 ~]$ mkdir tools
[margusja@vm38 ~]$ cd tools/
[margusja@vm38 tools]$ svn co svn://svn.code.sf.net/p/kaldi/code/trunk kaldi-trunk // checking out trunk currently leads to Problem ID-2 below
[margusja@vm38 tools]$ svn co -r 2720 svn://svn.code.sf.net/p/kaldi/code/trunk kaldi-trunk // 4xxxx series build
[margusja@vm38 ~]$ cd kaldi-trunk/
[margusja@vm38 ~]$ cd tools/
Downloaded http://sourceforge.net/projects/math-atlas/files/Stable/3.10.0/atlas3.10.0.tar.bz2 and built it - a huge amount of work!
[margusja@vm38 ~]$ make // since this is an old checkout, the references in the Makefile to external resources have changed and need to be updated
[margusja@vm38 tools]$ cd ../src/
[margusja@vm38 ~]$ ./configure
[margusja@vm38 ~]$ make depend
[margusja@vm38 ~]$ make test (optional)
[margusja@vm38 ~]$ make valgrind (optional; the memory tests can contain errors and take a long time)
[margusja@vm38 ~]$ make
[root@h14 ~]# wget http://mirror-fpt-telecom.fpt.net/fedora/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@h14 ~]# rpm -i epel-release-6-8.noarch.rpm
[root@vm38 ~]# yum install python-pip
[root@vm38 ~]$ CPPFLAGS="-I/home/margusja/kaldi/tools/kaldi-trunk/tools/openfst/include -L/home/margusja/kaldi/tools/kaldi-trunk/tools/openfst/lib" pip install pyfst
[margusja@vm38 ~]$ cd /home/margusja/kaldi/tools/
[margusja@vm38 tools]$ git clone https://github.com/alumae/kaldi-offline-transcriber.git
[margusja@vm38 tools]$ cd kaldi-offline-transcriber/
[margusja@vm38 kaldi-offline-transcriber]$ curl http://www.phon.ioc.ee/~tanela/kaldi-offline-transcriber-data.tgz | tar xvz
[margusja@vm38 kaldi-offline-transcriber]$ vim Makefile.options // inside it add the line KALDI_ROOT=/home/margusja/kaldi/tools/kaldi-trunk, or wherever your checkout is
[margusja@vm38 kaldi-offline-transcriber]$ make .init
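After the init step, transcription itself is driven through make targets. As far as I can tell from the project README, the usage is roughly the following (the file name is just an example):
[margusja@vm38 kaldi-offline-transcriber]$ cp ~/interview.ogg src-audio/
[margusja@vm38 kaldi-offline-transcriber]$ make build/output/interview.txt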
…
Problem ID-1:
sox formats: no handler for file extension `mp3'
Solution:
Convert mp3 to ogg
…
Problem ID-2:
steps/decode_nnet_cpu.sh --num-threads 1 --skip-scoring true --cmd "$decode_cmd" --nj 1 \
--transform-dir build/trans/test3/tri3b_mmi_pruned/decode \
build/fst/tri3b/graph_prunedlm build/trans/test3 `dirname build/trans/test3/nnet5c1_pruned/decode/log`
steps/decode_nnet_cpu.sh --num-threads 1 --skip-scoring true --cmd run.pl --nj 1 --transform-dir build/trans/test3/tri3b_mmi_pruned/decode build/fst/tri3b/graph_prunedlm build/trans/test3 build/trans/test3/nnet5c1_pruned/decode
steps/decode_nnet_cpu.sh: feature type is lda
steps/decode_nnet_cpu.sh: using transforms from build/trans/test3/tri3b_mmi_pruned/decode
run.pl: job failed, log is in build/trans/test3/nnet5c1_pruned/decode/log/decode.1.log
make: *** [build/trans/test3/nnet5c1_pruned/decode/log] Error 1
Solution:
[margusja@vm38 tools]$ svn co -r 2720 svn://svn.code.sf.net/p/kaldi/code/trunk kaldi-trunk
…
Problem ID-3
make build/output/[file].txt gives:
EFFECT OPTIONS (effopts): effect dependent; see --help-effect
sox: unrecognized option `--norm'
sox: SoX v14.2.0
Failed: invalid option
Solution, for now: remove the --norm option from the sox command in the Makefile.
…
Problem ID-4
Decoding done.
(cd build/trans/test2/nnet5c1_pruned; ln -s ../../../fst/tri3b/graph_prunedlm graph)
rm -rf build/trans/test2/nnet5c1_pruned_rescored_main
mkdir -p build/trans/test2/nnet5c1_pruned_rescored_main
(cd build/trans/test2/nnet5c1_pruned_rescored_main; for f in ../../../fst/nnet5c1/*; do ln -s $f; done)
local/lmrescore_lowmem.sh --cmd "$decode_cmd" --mode 1 build/fst/data/prunedlm build/fst/data/mainlm \
build/trans/test2 build/trans/test2/nnet5c1_pruned/decode build/trans/test2/nnet5c1_pruned_rescored_main/decode || exit 1;
local/lmrescore_lowmem.sh --cmd run.pl --mode 1 build/fst/data/prunedlm build/fst/data/mainlm build/trans/test2 build/trans/test2/nnet5c1_pruned/decode build/trans/test2/nnet5c1_pruned_rescored_main/decode
run.pl: job failed, log is in build/trans/test2/nnet5c1_pruned_rescored_main/decode/log/rescorelm.JOB.log
queue.pl: probably you forgot to put JOB=1:$nj in your script.
make: *** [build/trans/test2/nnet5c1_pruned_rescored_main/decode/log] Error 1
local/lmrescore_lowmem.sh --cmd utils/run.pl --mode 1 build/fst/data/prunedlm build/fst/data/mainlm build/trans/test2 build/trans/test2/nnet5c1_pruned/decode build/trans/test2/nnet5c1_pruned_rescored_main/decode
run.pl: job failed, log is in build/trans/test2/nnet5c1_pruned_rescored_main/decode/log/rescorelm.JOB.log
queue.pl: probably you forgot to put JOB=1:$nj in your script.
…
Problem ID-5:
/usr/bin/ld: skipping incompatible /usr/lib/libz.so when searching for -lz
Solution:
[root@h14 ~]# rpm -qif /usr/lib/libz.so
Name : zlib-devel Relocations: (not relocatable)
Version : 1.2.3 Vendor: CentOS
Release : 29.el6 Build Date: Fri 22 Feb 2013 01:01:21 AM EET
Install Date: Fri 14 Mar 2014 10:21:49 AM EET Build Host: c6b9.bsys.dev.centos.org
Group : Development/Libraries Source RPM: zlib-1.2.3-29.el6.src.rpm
Size : 117494 License: zlib and Boost
Signature : RSA/SHA1, Sat 23 Feb 2013 07:53:47 PM EET, Key ID 0946fca2c105b9de
Packager : CentOS BuildSystem <http://bugs.centos.org>
URL : http://www.gzip.org/zlib/
Summary : Header files and libraries for Zlib development
Description :
The zlib-devel package contains the header files and libraries needed
to develop programs that use the zlib compression and decompression
library.
[root@h14 ~]# yum install zlib-devel