Margus Roo – If you're inventing and pioneering, you have to be willing to be misunderstood for long periods of time

Process mining – Assumed Bias

Posted on August 19, 2015 - August 19, 2015 by margusja

[Screenshot]

Posted in ProM

Alpha algorithm

Posted on August 19, 2015 - August 19, 2015 by margusja

[Screenshots]

Posted in ProM

Confusion matrix

Posted on August 18, 2015 - August 19, 2015 by margusja

[Screenshot]

K – total number of instances

P – correctly classified instances (TP + TN)

[Screenshots]
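
The measures are just ratios over the four cells of the matrix. A minimal Java sketch with made-up TP/FP/FN/TN counts (the class name ConfusionMatrixMeasures and the numbers are mine):

public class ConfusionMatrixMeasures {

    public static void main(String[] args) {
        // Made-up counts for a two-class classifier
        double tp = 40, fp = 10, fn = 5, tn = 45;

        double k = tp + fp + fn + tn; // K – total number of instances
        double p = tp + tn;           // P – correctly classified instances

        System.out.println("Accuracy:  " + (p / k));         // (TP+TN)/K
        System.out.println("Error:     " + ((fp + fn) / k)); // (FP+FN)/K
        System.out.println("Precision: " + (tp / (tp + fp)));
        System.out.println("Recall:    " + (tp / (tp + fn)));
    }
}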

 

Posted in Machine Learning

K-means: find clusters

Posted on August 18, 2015 by margusja

Initial set of two-dimensional data and k=3. Set the initial centroids randomly.

[Screenshot]

Assign instances to the closest centroids

[Screenshot]

Calculate new positions for centroids

[Screenshot]

Again assign instances to the closest centroids

[Screenshot]

Again calculate the new positions of the centroids

[Screenshot]

Again assign instances to the closest centroids

[Screenshot]
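
The same assign/update cycle is easy to sketch in code. A minimal one-dimensional Java version (the data, k and the initial centroids are made up; the example above is two-dimensional, but the loop is identical):

import java.util.Arrays;

public class SimpleKMeans {

    public static void main(String[] args) {
        double[] data = {1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 8.8, 9.4};
        double[] centroids = {1.0, 5.0, 9.0}; // k = 3, initialized by hand
        int[] assignment = new int[data.length];

        boolean changed = true;
        while (changed) {
            changed = false;
            // Step 1: assign each instance to the closest centroid
            for (int i = 0; i < data.length; i++) {
                int best = 0;
                for (int c = 1; c < centroids.length; c++) {
                    if (Math.abs(data[i] - centroids[c]) < Math.abs(data[i] - centroids[best])) {
                        best = c;
                    }
                }
                if (assignment[i] != best) {
                    assignment[i] = best;
                    changed = true;
                }
            }
            // Step 2: move each centroid to the mean of its assigned instances
            for (int c = 0; c < centroids.length; c++) {
                double sum = 0;
                int count = 0;
                for (int i = 0; i < data.length; i++) {
                    if (assignment[i] == c) { sum += data[i]; count++; }
                }
                if (count > 0) centroids[c] = sum / count;
            }
        }
        System.out.println("Centroids: " + Arrays.toString(centroids));
    }
}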

Posted in Linux

Association rules

Posted on August 18, 2015 - August 18, 2015 by margusja

[Screenshots]
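
For reference, the two standard measures behind association rules are support and confidence. A minimal Java sketch with made-up transactions (the class SupportConfidence and the basket data are mine):

import java.util.Arrays;
import java.util.List;

public class SupportConfidence {

    public static void main(String[] args) {
        // Made-up market-basket transactions
        List<List<String>> transactions = Arrays.asList(
            Arrays.asList("bread", "milk"),
            Arrays.asList("bread", "butter"),
            Arrays.asList("bread", "milk", "butter"),
            Arrays.asList("milk"));

        double nBread = count(transactions, "bread");
        double nBreadMilk = 0;
        for (List<String> t : transactions) {
            if (t.contains("bread") && t.contains("milk")) nBreadMilk++;
        }

        // support(bread -> milk) = P(bread and milk together)
        System.out.println("support    = " + nBreadMilk / transactions.size()); // 0.5
        // confidence(bread -> milk) = P(milk | bread)
        System.out.println("confidence = " + nBreadMilk / nBread);              // 0.666...
    }

    static int count(List<List<String>> ts, String item) {
        int n = 0;
        for (List<String> t : ts) if (t.contains(item)) n++;
        return n;
    }
}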

 

Posted in Machine Learning

Real life versus designed process

Posted on August 18, 2015 by margusja

[Screenshot]

Posted in Machine Learning, Math

In a group of 365 people it is very unlikely that everyone has a different birthdate

Posted on August 12, 2015 by margusja

In a group of 365 people it is very unlikely that everyone has a different birthdate. The probability is 365!/365^365 ≈ 1.454955 × 10^−157 ≈ 0, i.e., incredibly small.
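
As a quick sanity check, the number can be reproduced by summing logarithms, so the factorial never overflows. A minimal Java sketch (the class name BirthdayProbability is mine):

public class BirthdayProbability {

    public static void main(String[] args) {
        // P(all 365 birthdays distinct) = 365!/365^365 = product of i/365 for i = 1..365
        // Sum the log10 terms so the huge factorial never overflows a double.
        double log10P = 0;
        for (int i = 1; i <= 365; i++) {
            log10P += Math.log10(i / 365.0);
        }
        System.out.println("P = " + Math.pow(10, log10P)); // ≈ 1.454955E-157
    }
}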

Posted in Linux

Confusion matrix for two classes and some performance measures for classifiers

Posted on August 12, 2015 by margusja

[Screenshot]

Posted in Linux

Neural network

Posted on August 3, 2015 - August 5, 2015 by margusja

a = x1*w1 + x2*w2 + x3*w3 + … + xn*wn

[Screenshot]

Feedforward network

[Screenshot]

In case we have an 8×8 matrix, we need 64 inputs

[Screenshot]

Threshold

[Screenshot]

Bias

[Screenshot]

Learning

[Diagram]

Learning rate = 0.1
Expected output = 1
Actual output =  0
Error = 1

Weight Update:
wi = r E x + wi
w1 = 0.1 x 1 x 1 + w1
w2 = 0.1 x 1 x 1 + w2

New Weights:
w1 = 0.4
w2 = 0.4

[Diagram]

Learning rate = 0.1
Expected output = 1
Actual output =  0
Error = 1

Weight Update:
wi = r E x + wi
w1 = 0.1 x 1 x 1 + w1
w2 = 0.1 x 1 x 1 + w2

New Weights:
w1 = 0.5
w2 = 0.5

[Diagram]

Learning rate = 0.1
Expected output = 1
Actual output =  1
Error = 0

No error,
training complete.

 

Let's implement it in Java:

public class SimpleNN {

    private double learning_rate;
    private double expected_output;
    private double actual_output;
    private double error;

    public static void main(String[] args) {

        // initial
        SimpleNN snn = new SimpleNN();
        snn.learning_rate = 0.1;
        snn.expected_output = 1;
        snn.actual_output = 0;
        snn.error = 1;

        // inputs
        int i1 = 1;
        int i2 = 1;

        // initial weights
        double w1 = 0.3;
        double w2 = 0.3;

        // loop until the output is close enough to the expected output
        while (true) {
            System.out.println("Error: " + snn.error);
            System.out.println("w1: " + w1);
            System.out.println("w2: " + w2);
            System.out.println("actual output: " + snn.actual_output);
            // perceptron update rule: wi = wi + r * E * xi
            w1 = snn.learning_rate * (snn.expected_output - snn.actual_output) * i1 + w1;
            w2 = snn.learning_rate * (snn.expected_output - snn.actual_output) * i2 + w2;
            snn.actual_output = w1 + w2;
            if (snn.actual_output >= 0.99)
                break;
        }
        System.out.println("Final weights w1: " + w1 + " and w2: " + w2);
    }
}

Run it:
Error: 1.0
w1: 0.3
w2: 0.3
actual output: 0.0
Error: 1.0
w1: 0.4
w2: 0.4
actual output: 0.8
Error: 1.0
w1: 0.42000000000000004
w2: 0.42000000000000004
actual output: 0.8400000000000001
Error: 1.0
w1: 0.43600000000000005
w2: 0.43600000000000005
actual output: 0.8720000000000001
Error: 1.0
w1: 0.44880000000000003
w2: 0.44880000000000003
actual output: 0.8976000000000001
Error: 1.0
w1: 0.45904
w2: 0.45904
actual output: 0.91808
Error: 1.0
w1: 0.467232
w2: 0.467232
actual output: 0.934464
Error: 1.0
w1: 0.4737856
w2: 0.4737856
actual output: 0.9475712
Error: 1.0
w1: 0.47902848
w2: 0.47902848
actual output: 0.95805696
Error: 1.0
w1: 0.48322278399999996
w2: 0.48322278399999996
actual output: 0.9664455679999999
Error: 1.0
w1: 0.4865782272
w2: 0.4865782272
actual output: 0.9731564544
Error: 1.0
w1: 0.48926258176
w2: 0.48926258176
actual output: 0.97852516352
Error: 1.0
w1: 0.491410065408
w2: 0.491410065408
actual output: 0.982820130816
Error: 1.0
w1: 0.4931280523264
w2: 0.4931280523264
actual output: 0.9862561046528
Error: 1.0
w1: 0.49450244186112
w2: 0.49450244186112
actual output: 0.98900488372224
Final weights w1: 0.495601953488896 and w2: 0.495601953488896

 

So…Christopher CAN LEARN!!!

 

Neural network with AND logic.

0 & 0 = 0

0 & 1 = 0

1 & 0 = 0

1 & 1 = 1

With a neural network, you need only one perceptron for this

[Screenshot]

Weights and the AND logic calculated on paper:

[Diagram]

So we need to find weights that fulfil these conditions (with threshold = 1):

0 * w1 + 0 * w2 < 1

0 * w1 + 1 * w2 < 1

1 * w1 + 0 * w2 < 1

1 * w1 + 1 * w2 >= 1

Java code to implement this:

import java.util.Arrays;

public class AndNeuronNet {

    private double learning_rate;
    private double threshold;

    public static void main(String[] args) {

        // initial
        AndNeuronNet snn = new AndNeuronNet();
        snn.learning_rate = 0.1;
        snn.threshold = 1;

        // AND function training data: {{inputs}, {target}}
        int[][][] trainingData = {
            {{0, 0}, {0}},
            {{0, 1}, {0}},
            {{1, 0}, {0}},
            {{1, 1}, {1}},
        };

        // Init weights
        double[] weights = {0.0, 0.0};

        // loop until we get 0 errors over the whole training set
        while (true) {
            int errorCount = 0;

            for (int i = 0; i < trainingData.length; i++) {
                System.out.println("Starting weights: " + Arrays.toString(weights));
                System.out.println("Inputs: " + Arrays.toString(trainingData[i][0]));
                // Calculate weighted input
                double weightedSum = 0;
                for (int ii = 0; ii < trainingData[i][0].length; ii++) {
                    weightedSum += trainingData[i][0][ii] * weights[ii];
                }
                System.out.println("Weightedsum in training: " + weightedSum);

                // Calculate output
                int output = 0;
                if (snn.threshold <= weightedSum) {
                    output = 1;
                }

                System.out.println("Target output: " + trainingData[i][1][0] + ", " + "Actual Output: " + output);

                // Calculate error
                int error = trainingData[i][1][0] - output;
                System.out.println("Error: " + error);

                // Increase error count for incorrect output
                if (error != 0) {
                    errorCount++;
                }

                // Update weights: wi = wi + r * error * xi
                for (int ii = 0; ii < trainingData[i][0].length; ii++) {
                    weights[ii] += snn.learning_rate * error * trainingData[i][0][ii];
                }

                System.out.println("New weights: " + Arrays.toString(weights));
                System.out.println();
            }

            System.out.println("ErrorCount: " + errorCount);
            // If there are no errors, stop
            if (errorCount == 0) {
                System.out.println("Final weights: " + Arrays.toString(weights));
                System.exit(0);
            }
        }
    }
}

Compile and run:

Starting weights: [0.0, 0.0]
Inputs: [0, 0]
Weightedsum in training: 0.0
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.0, 0.0]

Starting weights: [0.0, 0.0]
Inputs: [0, 1]
Weightedsum in training: 0.0
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.0, 0.0]

Starting weights: [0.0, 0.0]
Inputs: [1, 0]
Weightedsum in training: 0.0
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.0, 0.0]

Starting weights: [0.0, 0.0]
Inputs: [1, 1]
Weightedsum in training: 0.0
Target output: 1, Actual Output: 0
Error: 1
New weights: [0.1, 0.1]

ErrorCount: 1

Starting weights: [0.1, 0.1]
Inputs: [0, 0]
Weightedsum in training: 0.0
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.1, 0.1]

Starting weights: [0.1, 0.1]
Inputs: [0, 1]
Weightedsum in training: 0.1
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.1, 0.1]

Starting weights: [0.1, 0.1]
Inputs: [1, 0]
Weightedsum in training: 0.1
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.1, 0.1]

Starting weights: [0.1, 0.1]
Inputs: [1, 1]
Weightedsum in training: 0.2
Target output: 1, Actual Output: 0
Error: 1
New weights: [0.2, 0.2]

ErrorCount: 1

Starting weights: [0.2, 0.2]
Inputs: [0, 0]
Weightedsum in training: 0.0
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.2, 0.2]

Starting weights: [0.2, 0.2]
Inputs: [0, 1]
Weightedsum in training: 0.2
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.2, 0.2]

Starting weights: [0.2, 0.2]
Inputs: [1, 0]
Weightedsum in training: 0.2
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.2, 0.2]

Starting weights: [0.2, 0.2]
Inputs: [1, 1]
Weightedsum in training: 0.4
Target output: 1, Actual Output: 0
Error: 1
New weights: [0.30000000000000004, 0.30000000000000004]

ErrorCount: 1

Starting weights: [0.30000000000000004, 0.30000000000000004]
Inputs: [0, 0]
Weightedsum in training: 0.0
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.30000000000000004, 0.30000000000000004]

Starting weights: [0.30000000000000004, 0.30000000000000004]
Inputs: [0, 1]
Weightedsum in training: 0.30000000000000004
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.30000000000000004, 0.30000000000000004]

Starting weights: [0.30000000000000004, 0.30000000000000004]
Inputs: [1, 0]
Weightedsum in training: 0.30000000000000004
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.30000000000000004, 0.30000000000000004]

Starting weights: [0.30000000000000004, 0.30000000000000004]
Inputs: [1, 1]
Weightedsum in training: 0.6000000000000001
Target output: 1, Actual Output: 0
Error: 1
New weights: [0.4, 0.4]

ErrorCount: 1

Starting weights: [0.4, 0.4]
Inputs: [0, 0]
Weightedsum in training: 0.0
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.4, 0.4]

Starting weights: [0.4, 0.4]
Inputs: [0, 1]
Weightedsum in training: 0.4
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.4, 0.4]

Starting weights: [0.4, 0.4]
Inputs: [1, 0]
Weightedsum in training: 0.4
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.4, 0.4]

Starting weights: [0.4, 0.4]
Inputs: [1, 1]
Weightedsum in training: 0.8
Target output: 1, Actual Output: 0
Error: 1
New weights: [0.5, 0.5]

ErrorCount: 1

Starting weights: [0.5, 0.5]
Inputs: [0, 0]
Weightedsum in training: 0.0
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.5, 0.5]

Starting weights: [0.5, 0.5]
Inputs: [0, 1]
Weightedsum in training: 0.5
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.5, 0.5]

Starting weights: [0.5, 0.5]
Inputs: [1, 0]
Weightedsum in training: 0.5
Target output: 0, Actual Output: 0
Error: 0
New weights: [0.5, 0.5]

Starting weights: [0.5, 0.5]
Inputs: [1, 1]
Weightedsum in training: 1.0
Target output: 1, Actual Output: 1
Error: 0
New weights: [0.5, 0.5]

ErrorCount: 0
Final weights: [0.5, 0.5]

So what did we do?

Basically we iterated over the training set until we found weights that fulfil the condition for every record in the training set:

if (threshold <= input1[i] * weight1 + input2[i] * weight2) then output 1 (done if target[i] == 1, else error and loop again) else output 0 (done if target[i] == 0, else error and loop again)

With one perceptron we can solve linearly separable boolean problems like AND. In the picture below you can see such weights; the neural network can find them.
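
With the final weights w1 = w2 = 0.5 and threshold 1 from the run above, the four AND cases check out:

0 * 0.5 + 0 * 0.5 = 0.0 < 1 → output 0
0 * 0.5 + 1 * 0.5 = 0.5 < 1 → output 0
1 * 0.5 + 0 * 0.5 = 0.5 < 1 → output 0
1 * 0.5 + 1 * 0.5 = 1.0 >= 1 → output 1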

[Screenshot]

Posted in Machine Learning, Math

Fuzzy

Posted on July 30, 2015 - July 30, 2015 by margusja

We start by defining the input temperature states using “membership functions”:
[Diagram]

With this scheme, the input variable’s state no longer jumps abruptly from one state to the next. Instead, as the temperature changes, it loses value in one membership function while gaining value in the next. In other words, its ranking in the category of cold decreases as it becomes more highly ranked in the warmer category.

At any sampled timeframe, the “truth value” of the brake temperature will almost always be in some degree part of two membership functions: i.e.: ‘0.6 nominal and 0.4 warm’, or ‘0.7 nominal and 0.3 cool’, and so on.

 

In practice, the controller accepts the inputs and maps them into their membership functions and truth values. These mappings are then fed into the rules. If the rule specifies an AND relationship between the mappings of the two input variables, as the examples above do, the minimum of the two is used as the combined truth value; if an OR is specified, the maximum is used. The appropriate output state is selected and assigned a membership value at the truth level of the premise. The truth values are then defuzzified. For an example, assume the temperature is in the “cool” state, and the pressure is in the “low” and “ok” states. The pressure values ensure that only rules 2 and 3 fire:
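
A minimal Java sketch of that min/max combination, following the example above (all membership values are made up for illustration):

public class FuzzyRuleEval {

    public static void main(String[] args) {
        // Made-up truth values from the membership functions
        double tempCool = 0.7;
        double pressureLow = 0.4;
        double pressureOk = 0.6;

        // Rules 2 and 3 use AND, so each combines its inputs with min
        double rule2 = Math.min(tempCool, pressureLow); // 0.4
        double rule3 = Math.min(tempCool, pressureOk);  // 0.6

        // An OR between mappings would use max instead
        double combined = Math.max(rule2, rule3);       // 0.6

        System.out.println("Rule 2 fires at " + rule2 + ", rule 3 at " + rule3
                + ", OR-combined: " + combined);
    }
}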

 

[Diagrams: Rule 2 and Rule 3 evaluation]

https://en.wikipedia.org/wiki/Fuzzy_control_system

Posted in Elektroonika, IT
