Ethereum JSON-RPC eth_getStorageAt

Create a simple contract:


contract Storage {
    uint pos0;                     // storage slot 0
    mapping(address => uint) pos1; // slot 1; values live at keccak256(pad32(key) ++ pad32(1))

    function Storage() {           // constructor (pre-0.5 Solidity syntax)
        pos0 = 1234;
        pos1[msg.sender] = 5678;
    }
}

Deploy it and note the contract address returned. In this example: 0x9a5CdfCb1132dcbEca55b213372224D9bd0209c2

Now let's execute it from one account. In this example the sender is 0xa2213890a81042692B4716025D6e98349b432349

Let's see how we can read the storage associated with the contract.

We can use the JSON-RPC method eth_getStorageAt, or web3.eth.getStorageAt from the geth console. In this example I'll use JSON-RPC eth_getStorageAt.

A contract's storage is essentially a key-value store: 32-byte slot keys map to 32-byte values.

Getting pos0 is simple, since it occupies slot 0:

margusja@IRack:~$ curl -X POST --data '{"jsonrpc":"2.0", "method": "eth_getStorageAt", "params": ["0x9a5CdfCb1132dcbEca55b213372224D9bd0209c2", "0x0", "latest"], "id": 1}' localhost:8545
{"jsonrpc":"2.0","id":1,"result":"0x00000000000000000000000000000000000000000000000000000000000004d2"}

The hex value 0x4d2 is 1234 in decimal.

Getting pos1 is trickier. For a mapping stored at slot p, the value for a key lives at keccak256(pad32(key) ++ pad32(p)), so we first have to calculate that slot from the sender's address and the mapping's slot index. Go to the geth console and:

var key = "000000000000000000000000a2213890a81042692B4716025D6e98349b432349" + "0000000000000000000000000000000000000000000000000000000000000001"

Both parts are left-padded with zeros to 32 bytes (64 hex characters). Next, in the geth console:

web3.sha3(key, {"encoding": "hex"}) – it returns the storage slot: 0x790e4fae970c427bd6d93e3f64ba898c69fdead01d68e500efb6f3abc672d632

Now we can get value from storage:

margusja@IRack:~$ curl -X POST --data '{"jsonrpc":"2.0", "method": "eth_getStorageAt", "params": ["0x9a5CdfCb1132dcbEca55b213372224D9bd0209c2", "0x790e4fae970c427bd6d93e3f64ba898c69fdead01d68e500efb6f3abc672d632", "latest"], "id": 1}' localhost:8545
{"jsonrpc":"2.0","id":1,"result":"0x000000000000000000000000000000000000000000000000000000000000162e"}

The hex value 0x162e is 5678 in decimal.
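The slot arithmetic above can be sketched in plain Python. Keccak-256 is not in the standard library, so this only builds the padded preimage and decodes the hex results; the slot hash itself still comes from web3.sha3 as shown above.

```python
# Build the preimage for the mapping slot: pad32(address) + pad32(slot_index)
sender = "a2213890a81042692B4716025D6e98349b432349"  # sender address without 0x
slot_index = 1                                       # pos1 sits at slot 1
key = sender.rjust(64, "0") + format(slot_index, "064x")
print(key)  # the string passed to web3.sha3 above

# Decode the 32-byte values returned by eth_getStorageAt
pos0 = "0x00000000000000000000000000000000000000000000000000000000000004d2"
pos1 = "0x000000000000000000000000000000000000000000000000000000000000162e"
print(int(pos0, 16))  # 1234
print(int(pos1, 16))  # 5678
```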

Source – https://github.com/ethereum/wiki/wiki/JSON-RPC#web3_clientversion

Fun with OpenCV #2

An image is, at its core, just a collection of numbers in a particular format.

For example, the image below (600×338) consists of 202,800 pixels, where each element's value lies in the range 0…255 (grayscale).

In the computer, the image data is laid out as [row1; row2; row3 … row600], where each row consists of 338 elements.

Image =

[143, 138, 139, 139, 143, 140, 142, 142, 143, 141, 141, 143, 145, 145, 144, 143, 143, 149, 150, 147, 147, 150, 151, 151, 151, 151, 151, 152, 154, 154, 152, 149, 153, 151, 152, 154, 155, 154, 153, 154, 159, 158, 157, 157, 156, 156, 156, 156, 156, 157, 157, 154, 153, 154, 157, 159, 158, 155, 156, 157, 157, 157, 158, 158, 155, 157, 159, 159, 157, 156, 157, 160, 163, 159, 160, 162, 159, 159, 161, 159, 161, 163, 163, 164, 165, 166, 166, 165, 165, 167, 168, 167, 165, 163, 163, 164, 164, 162, 161, 161, 162, 163, 163, 162, 161, 164, 164, 163, 165, 170, 169, 166, 168, 168, 166, 167, 167, 166, 168, 166, 166, 163, 162, 165, 167, 168, 167, 167, 166, 167, 168, 168, 166, 166, 168, 170, 167, 166, 167, 148,  91,  57,  56, 143, 168, 169, 161,  78,  17,  42,  34,  35,  30,  24,  21,  22,  24,  23,  22,  23,  21,  28,  29,  27,  26,  27,  30,  28,  24,  27,  28,  26,  27,  29,  28,  25,  29,  27,  27,  27,  26,  25,  26,  25,  27,  20,  19,  23,  20,  23,  24,  28,  27,  31,  34,  34,  35,  34,  32,  31,  32,  27,  27,  29,  31,  30,  28,  25,  21,  23,  22,  27,  23,  21,  21,  23,  25,  27,  27,  23,  20,  21,  23,  23,  23,  27,  20,  22,  23,  18,  23,  24,  27,  16,  30,  40,  33,  38,  10,  61, 154, 122, 137, 145, 146, 130, 130, 133, 130, 125,  94,  86,  99, 108,  96,  98,  95, 105, 100,  82,  66,  62,  61,  61,  73,  79,  72,  66,  73,  77,  68,  57,  44,  47,  70,  87,  77,  59,  55,  63,  57,  55,  58,  46,  52,  57,  56,  57,  64,  62,  62,  82, 113, 117, 119, 127, 116, 114, 113, 111, 105,  49,  34,  50, 136, 156, 156, 163, 164, 160, 158, 153, 158, 164, 166, 163, 161, 162, 160, 158, 153, 150, 146, 139, 138, 133, 119, 114,  75,  17,  33,  30,  63,  67,  69,  72,  72,  73,  67,  65,  59, 144, 159, 156, 156, 156, 159, 147, 125….

52,  54,  57,  57,  55,  60,  86,  90,  98, 111, 115, 112, 100, 103, 106, 119, 141, 158, 159, 158, 157, 158, 159, 161, 164, 172, 178, 180, 177, 176, 181, 185, 184, 183, 177, 160, 140, 135, 134, 135, 145, 151, 149, 147, 142, 143, 144, 160, 179, 183, 173, 178, 186, 186, 187, 188, 189, 183, 178, 181, 182, 180, 181, 179, 176, 174, 172, 170, 171, 171, 170, 172, 169, 174, 173, 179, 181, 182, 187, 182, 174, 169, 166, 162, 161, 164, 166, 169, 172, 174, 176, 179, 180, 168, 157, 160, 165, 176, 174, 106,   0,  22,  19,  18,  20,  11,   4,   5,   6,   4,   3,   2,   2]

Since we can operate on every element individually, let's do a simple operation: change every element whose value is 183 to 255 (white):

 

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, const char * argv[]) {

    Mat image;

    // 0 = load as a single-channel grayscale image
    image = imread( "image.jpg", 0 );

    int channels = image.channels();
    int cols = image.cols;
    int rows = image.rows;

    cout << "Image = " << endl << " " << image << endl << endl;
    cout << "Channels = " << endl << " " << channels << endl << endl;
    cout << "Rows = " << endl << " " << rows << endl << endl;
    cout << "Cols = " << endl << " " << cols << endl << endl;
    cout << "Size = " << endl << " " << image.total() << endl << endl;

    // replace every pixel with value 183 by 255 (white)
    for (int i = 0; i < image.rows; i++) {
        for (int j = 0; j < image.cols; j++) {
            if (image.at<uchar>(i, j) == 183) {
                image.at<uchar>(i, j) = 255;
            }
        }
    }

    // visualize image
    namedWindow( "demo", WINDOW_AUTOSIZE );
    imshow( "demo", image );
    waitKey(0);

    return 0;
}
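The same per-pixel replacement can be sketched in plain Python, treating the grayscale image as a nested list of values. No OpenCV is needed; the tiny 2×3 sample values here are made up for illustration:

```python
# A tiny hypothetical grayscale "image": rows of pixel values in 0..255
image = [
    [143, 183, 139],
    [183,  27, 255],
]

# Replace every pixel with value 183 by 255 (white)
for i in range(len(image)):
    for j in range(len(image[i])):
        if image[i][j] == 183:
            image[i][j] = 255

print(image)  # [[143, 255, 139], [255, 27, 255]]
```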

 

As a result, we get a new image:

OpenCV-3.2.0 Object tracking

 

Object tracking is much faster than object detection.
Source: https://www.learnopencv.com/object-tracking-using-opencv-cpp-python/

OpenCV Feature Matching test

 

FOUND 7831 keypoints on first image
FOUND 2606 keypoints on second image
SURF run time: 1780.93 ms
Max distance: 0.500609
Min distance: 0.0160885
Calculating homography using 50 point pairs.

 

Source code – https://github.com/opencv/opencv_contrib/blob/master/modules/xfeatures2d/samples/surf_matcher.cpp

Apache Spark – some hints

  • Stages – pipelined jobs: RDD -> RDD -> RDD (narrow dependencies)
  • Shuffle – the transfer of data between stages (wide dependencies)
  • Debug – to visualise how an RDD is built: input.toDebugString (input is an RDD)
  • Cache expensive RDDs after a shuffle
  • Use accumulators (counters inside executors) to debug RDDs – values are visible via the UI
  • Pipeline as much as possible (rdd -> map -> filter) into one stage
  • Split into stages to reorganise RDDs
  • Avoid shuffling large amounts of data
  • Partitions: aim for about 2× the number of cores in the cluster
  • A single task should take no longer than about 100 ms
  • Memory problems – check dmesg for the oom-killer
  • Use the built-in aggregateByKey, not your own aggregation or groupBy
  • Filter as early as you can
  • Use the KryoSerializer
  • Put the YARN local dir on SSD disks (shuffle is faster)
  • Use high-level APIs (DataFrame for core processing)
  • rdd.reduceByKey(func) is better than rdd.groupByKey() followed by a reduce
  • Use data.join().explain()
  • RDD.distinct – shuffles!
  • Learning Spark (e-book)
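The reduceByKey-vs-groupByKey hint can be illustrated without Spark: reduceByKey combines values inside each partition before the shuffle (map-side combine), so far fewer records cross the network. A rough plain-Python simulation, where the partition layout and data are made up for illustration:

```python
from collections import defaultdict

# Two hypothetical partitions of (key, value) records
partitions = [
    [("a", 1), ("b", 2), ("a", 3)],
    [("a", 4), ("b", 5), ("b", 6)],
]

# groupByKey-style: every record is shuffled as-is, then reduced afterwards
shuffled_groupby = sum(len(p) for p in partitions)

# reduceByKey-style: pre-aggregate per partition first (map-side combine)
combined = []
for part in partitions:
    local = defaultdict(int)
    for k, v in part:
        local[k] += v          # local pre-aggregation, no network involved
    combined.append(dict(local))

shuffled_reduceby = sum(len(p) for p in combined)

# Final merge after the "shuffle"
result = defaultdict(int)
for part in combined:
    for k, v in part.items():
        result[k] += v

print(shuffled_groupby, shuffled_reduceby, dict(result))
# 6 4 {'a': 8, 'b': 13}
```

Here 6 records would cross the network with groupByKey but only 4 with reduceByKey; with realistic data sizes the gap is much larger.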

 

scala> List( 1, 2, 4, 3 ).reduce( (x,y) => x + y )
res22: Int = 10

scala> List( 1, 2, 4, 3 ).fold(0)((x,y) => x+y)
res24: Int = 10

scala> List( 1, 2, 4, 3 ).fold(0)((x,y) => { if (x > y) x else y } )
res25: Int = 4

scala> List( 5, 2, 4, 3 ).reduce( (a,b) => { if (a > b) a else b } )
res29: Int = 5

 

Avoid duplicates during joins

https://docs.databricks.com/spark/latest/faq/join-two-dataframes-duplicated-column.html

Apache-Spark 2.x + Yarn – some errors and solutions

Problem:
2017-03-24 09:15:55,235 ERROR [dispatcher-event-loop-2] cluster.YarnScheduler: Lost executor 2 on bigdata38.webmedia.int: Container marked as failed: container_e50_1490337980512_0004_01_000003 on host: bigdata38.webmedia.int. Exit status: 52. Diagnostics: Exception from container-launch.
Container id: container_e50_1490337980512_0004_01_000003
Exit code: 52
Container exited with a non-zero exit code 52
The exit code 52 comes from org.apache.spark.util.SparkExitCode, and it is val OOM=52 – i.e. an OutOfMemoryError

Problem:

2017-03-24 09:33:49,251 WARN  [dispatcher-event-loop-4] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_e50_1490337980512_0006_01_000002 on host: bigdata33.webmedia.int. Exit status: -100. Diagnostics: Container released on a *lost* node

2017-03-24 09:33:46,427 WARN  nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(311)) – Directory /hadoop/yarn/local error, used space above threshold of 90.0%, removing from list of valid directories

2017-03-24 09:33:46,427 WARN  nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(311)) – Directory /hadoop/yarn/log error, used space above threshold of 90.0%, removing from list of valid directories

2017-03-24 09:33:46,427 INFO  nodemanager.LocalDirsHandlerService (LocalDirsHandlerService.java:logDiskStatus(373)) – Disk(s) failed: 1/1 local-dirs are bad: /hadoop/yarn/local; 1/1 log-dirs are bad: /hadoop/yarn/log

2017-03-24 09:33:46,428 ERROR nodemanager.LocalDirsHandlerService (LocalDirsHandlerService.java:updateDirsAfterTest(366)) – Most of the disks failed. 1/1 local-dirs are bad: /hadoop/yarn/local; 1/1 log-dirs are bad: /hadoop/yarn/log

 

Problem:

2017-03-24 09:40:45,618 WARN  [dispatcher-event-loop-9] scheduler.TaskSetManager: Lost task 53.0 in stage 2.2 (TID 440, bigdata38.webmedia.int): ExecutorLostFailure (executor 9 exited caused by one of the running tasks) Reason: Container marked as failed: container_e50_1490337980512_0006_01_000010 on host: bigdata38.webmedia.int. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143

Container exited with a non-zero exit code 143

The GC overhead limit means the GC has been running almost non-stop without being able to recover much memory. The usual causes are either poorly written code holding many back references (doubtful here, since you are doing a simple join) or simply having reached memory capacity.

Possible problem (if compaction takes a long time; a full cycle should usually take less than 50 ms):

2017-03-24 11:46:41,488 INFO  recovery.NMLeveldbStateStoreService$LeveldbLogger (NMLeveldbStateStoreService.java:log(1032)) – Manual compaction at level-0 from (begin) .. (end); will stop at (end)

2017-03-24 11:46:41,489 INFO  recovery.NMLeveldbStateStoreService$LeveldbLogger (NMLeveldbStateStoreService.java:log(1032)) – Manual compaction at level-1 from (begin) .. (end); will stop at ‘NMTokens/appattempt_1490337980512_0011_000001’ @ 10303 : 1

2017-03-24 11:46:41,499 INFO  recovery.NMLeveldbStateStoreService$LeveldbLogger (NMLeveldbStateStoreService.java:log(1032)) – Manual compaction at level-1 from ‘NMTokens/appattempt_1490337980512_0011_000001’ @ 10303 : 1 .. (end); will stop at (end)

2017-03-24 11:46:41,500 INFO  recovery.NMLeveldbStateStoreService (NMLeveldbStateStoreService.java:run(1023)) – Full compaction cycle completed in 20 msec

Relevant configuration properties:

yarn.resourcemanager.leveldb-state-store.compaction-interval-secs

yarn.timeline-service.leveldb-timeline-store.path

Problem:

ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM

Out-Of-Memory error

17/03/31 15:31:12 ERROR SparkUncaughtExceptionHandler: [Container in shutdown] Uncaught exception in thread Thread[Executor task launch worker-26,5,main]
java.lang.OutOfMemoryError: GC overhead limit exceeded

A beautiful picture with Krissu

It's rare to get such a nice picture taken of yourself. It would be a shame to keep it only to myself.