Activity 10: Preprocessing of Handwritten Text

July 22, 2008 at 4:57 pm (Uncategorized)

The goal of this activity is to pre-process handwritten text for image processing. The original image I used is

with probability distribution

and 2d fft

We used a vertical mask to manipulate the FFT and remove the horizontal lines. The processed FFT is:

However, the lines were not removed but merely blurred. This was not the case in Activity 7.

The resulting binarized image is shown below.

The opening operation was used, but without much success. We did not use the closing operation since we want to retain the holes of the letters, which are a defining feature of the letters.

Code as follows:

chdir("/home/jeric/Desktop/ap18645activity10");
clear all;
getf("imhist.sce");
im=imread("ac10.jpg");
im=im2gray(im);

if max(im)<=1 then //scale normalized values to 8-bit gray levels
im=round(im*255);
end

[val,num]=imhist(im);
h=scf(1);
plot(val, num);

ff=fftshift(fft2(im));
h=scf(2);
imshow(abs(ff), []); xset("colormap", hotcolormap(64));

[x,y]=size(im);
newff=ff(:,:);

filter=ones(x,y);
for j=y/2-2:y/2+2 //vertical strip of columns about the center
for i=1:x/2-2
filter(i,j)=0;
end
for i=x/2+2:x
filter(i,j)=0;
end
end

newff=filter.*ff;
h=scf(3);
imshow(abs(newff), []); xset("colormap", hotcolormap(64));

h=scf(4);
im2=abs(fft2(newff));
imshow(im2, []);

im2=im2/max(im2); //normalize before thresholding
im=im2bw(im2, 160/255); //binarize the filtered image

h=scf(5);
imshow(im, []);

se=ones(2,2); //structuring element

im=erode(im, se); //opening
im=dilate(im, se);

im=erode(im, se); //opening
im=dilate(im, se);

[L,n]=bwlabel(im);

h=scf(6);
imshow(im, []); //preprocessed image

Rating: 6, because of the poor resulting image. The goal was not achieved.


Activity 9: Binary Operations

July 17, 2008 at 11:26 am (Uncategorized)

The goal of this activity is to find the size of each “cell” in the image. We will implement the various techniques we have learned so far to solve the problem.

One of the relevant concepts we will use is histogram thresholding to separate the background from the image. Another is the opening and closing operations from morphological transformations. Opening is mathematically defined as

A ∘ B = (A ⊖ B) ⊕ B

which simply means the dilation of an erosion (see previous entry). Opening has the effect of removing small objects or noise. Closing, on the other hand, is defined as

A • B = (A ⊕ B) ⊖ B

which means the erosion of a dilation. It has the effect of removing small holes in the object.
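These definitions can be sketched directly in a few lines of Python/NumPy (a toy illustration under the same definitions, not the SIP-toolbox erode/dilate used in the code below):

```python
import numpy as np

def erode(im, se):
    """Binary erosion: 1 where the structuring element fits entirely inside the object."""
    sh, sw = se.shape
    out = np.zeros_like(im)
    for i in range(im.shape[0] - sh + 1):
        for j in range(im.shape[1] - sw + 1):
            if np.all(im[i:i+sh, j:j+sw][se == 1] == 1):
                out[i + sh // 2, j + sw // 2] = 1
    return out

def dilate(im, se):
    """Binary dilation: 1 where the structuring element hits the object."""
    sh, sw = se.shape
    out = np.zeros_like(im)
    for i in range(im.shape[0] - sh + 1):
        for j in range(im.shape[1] - sw + 1):
            if np.any(im[i:i+sh, j:j+sw][se == 1] == 1):
                out[i + sh // 2, j + sw // 2] = 1
    return out

def opening(im, se):  # erosion then dilation: removes small specks
    return dilate(erode(im, se), se)

def closing(im, se):  # dilation then erosion: fills small holes
    return erode(dilate(im, se), se)
```

On a toy array, opening removes an isolated speck while keeping a solid block, and closing fills a one-pixel hole, mirroring the descriptions above.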

The image above is divided into 9 sub-images of size 256×256 to avoid using too much memory.

My code for the activity is shown below.

getf("imhist.sce");
im=imread('circ01_01.jpg');
stacksize(4e7);
pref='circ01_0';

//create structuring element
se=ones(10,10);
se=mkfftfilter(se, 'binary', 4); //circular structuring element of radius 4

area=[];
counter=1;

//scan the 9 sub-images
for i=1:9
im=imread(strcat([pref,string(i),'.jpg']));
im=im2gray(im);
im=im2bw(im, 205/255);
im=erode(im, se); //opening
im=dilate(im, se);
im=dilate(im, se); //closing
im=erode(im, se);
[L,n]=bwlabel(im);
//scan labeled regions
for j=1:n
area(counter)=length(find(L==j)); //pixel count of region j
counter=counter+1;
end
end

scf(10);
histplot(length(area), area);
x=find(area<600 & area>450); //keep only plausible single-cell areas
scf(11);
histplot(length(x), area(x));
a=area(x);
a=sum(a)/length(x) //mean area
y=stdev(area(x)) //standard deviation

The ‘pref’ variable is used to define the prefix of the sub-image filenames and is concatenated with a counter in the loop over the 9 sub-images. We also used the ‘bwlabel’ command, which looks for connected regions in a binary image and labels them accordingly.
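The idea behind bwlabel can be sketched as a simple flood fill in Python (a hypothetical re-implementation of the behavior, not SIP's actual algorithm):

```python
import numpy as np
from collections import deque

def bwlabel(im):
    """Label 4-connected regions of a binary image; returns (labels, n) like SIP's bwlabel."""
    labels = np.zeros(im.shape, int)
    n = 0
    for si in range(im.shape[0]):
        for sj in range(im.shape[1]):
            if im[si, sj] and not labels[si, sj]:
                n += 1                       # found a new region: give it the next label
                q = deque([(si, sj)])
                labels[si, sj] = n
                while q:                     # flood-fill the whole connected region
                    i, j = q.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        a, b = i + di, j + dj
                        if 0 <= a < im.shape[0] and 0 <= b < im.shape[1] \
                           and im[a, b] and not labels[a, b]:
                            labels[a, b] = n
                            q.append((a, b))
    return labels, n
```

The region areas used above are then just the pixel counts per label, e.g. `np.sum(labels == j)`.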

The histogram yields:

We can see that bwlabel has considered overlapping cells as one. However, we limit our interest to areas between 450 and 600 pixels (the limits used in the code) to increase the accuracy of the calculated average cell area. As can be seen below, some regions labeled by bwlabel were extremely small and some were very large, so setting a limit on the acceptable area is necessary.

The calculated area is 538.15 pixels with a standard deviation of 22.645. We are confident that this is indeed within the limits of the accepted area. To check the validity of our results, we took a sub-image and calculated the area of the “cells”. The sub-image we took has completely separated cells, which ensures that the area we calculate is the area of a single cell and not the area of compounded/joined cells.

I gave myself a rating of 10 because I believe I was able to do the activity right.

Collaborators: Cole, Julie.


Activity 8: Morphological Transforms

July 15, 2008 at 9:04 pm (Uncategorized)

Dilation and erosion are morphological transforms applied to a binary image (white/1 foreground and black/0 background). Dilation is mathematically defined as

A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }

where A is the image and B is the structuring element on which the dilation is based. In lay terms (or in more comprehensible terms), it is the set of all translations z of the reflected B whose intersection with A is not empty. This can be illustrated as

On the other hand, erosion is mathematically defined as

A ⊖ B = { z | B_z ⊆ A }

or, in more comprehensible terms, the set of all points z such that B translated by z is contained in A. The effect of erosion is to shrink the image by the shape of B. This can be illustrated as

(based on Dr Soriano’s lecture)
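The two set definitions above can be checked literally on small point sets in Python (a toy sketch on coordinate sets, not images):

```python
def dilate_set(A, B):
    """A ⊕ B: translations z of the reflected B that hit A.
    For point sets this is equivalent to the Minkowski sum {a + b}."""
    return {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}

def erode_set(A, B):
    """A ⊖ B: all z such that B translated by z is contained in A."""
    zs = {(a[0] - b[0], a[1] - b[1]) for a in A for b in B}  # candidate translations
    return {z for z in zs if all((z[0] + b[0], z[1] + b[1]) in A for b in B)}
```

For a 2×2 square A and a 1×2 structuring element B, dilation grows A to 2×3 while erosion shrinks it to 2×1, which is exactly the expand/shrink behavior described above.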

In this activity, we used different images (circle, square, hollow square, triangle, and cross) and varying structuring elements, shown here:

The results of the morphological transform are shown below. The code is

se1=ones(4,4); //square
se2=ones(2,4); //rectangle
se3=ones(4,2); //rectangle
se4=[0 0 1 0 0; 0 0 1 0 0; 1 1 1 1 1; 0 0 1 0 0; 0 0 1 0 0]; //cross

im=imread('cross.PNG');
im=im(:,:,1);
er1=erode(im, se1, [1,1]);
er2=erode(im, se2, [1,1]);
er3=erode(im, se3, [1,1]);
er4=erode(im, se4, [3,3]);

di1=dilate(im, se1, [1,1]);
di2=dilate(im, se2, [1,1]);
di3=dilate(im, se3, [1,1]);
di4=dilate(im, se4, [3,3]);

scf(1);
subplot(2,2,1);
imshow(se1);
subplot(2,2,2);
imshow(se2);
subplot(2,2,3);
imshow(se3);
subplot(2,2,4);
imshow(se4);

scf(2);
subplot(2,2,1);
imshow(er1);
subplot(2,2,2);
imshow(er2);
subplot(2,2,3);
imshow(er3);
subplot(2,2,4);
imshow(er4);

scf(3);
subplot(2,2,1);
imshow(di1);
subplot(2,2,2);
imshow(di2);
subplot(2,2,3);
imshow(di3);
subplot(2,2,4);
imshow(di4);

Circle Dilation

Circle Erosion

Cross

Cross Dilate

Cross Erode

Hollow Square

Hollow Dilate

Hollow Erode

Square

Square Dilate

Square Erode

Triangle

Triangle Dilate

Triangle Erode

I will give myself 10 pts here since the actual and predicted results are in good correspondence.

Thanks to Julie and JC for the help. 🙂


Activity 7: Enhancement in the Frequency Domain

July 10, 2008 at 9:52 am (Uncategorized)

Varying the frequency of a sinusoid image changes the location of the peaks of the FFT. Higher frequencies correspond to FFT peaks farther from (0,0), which is expected.

and their corresponding FT

Rotating the image consequently rotates the FFT, but in the opposite direction.

Multiplying a sine and a cosine results in peaks whose coordinates combine those of the individual sine and cosine peaks.

-oOo-

Fingerprint Enhancement

We now apply some filtering techniques to enhance this fingerprint sample.

clear all;
im=imread("finger.jpg");
im=im2gray(im);
im=im-mean(im); //remove the mean so the DC term does not dominate the FFT display
ff=fft2(im);
h=scf(1);
imshow(abs(fftshift(ff)), []); xset("colormap", hotcolormap(64)); //show fft of image

//make exponential high-pass filter
filter=mkfftfilter(im, 'exp', 30);
filter=fftshift(1-filter);

//perform enhancement
enhancedft=filter.*ff;
enhanced=real(fft2(enhancedft));
h=scf(2);
imshow(enhanced, []);

h=scf(3);
imshow(abs(fftshift(fft2(enhanced))), []); xset("colormap", hotcolormap(64));
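The high-pass idea can be sketched in Python/NumPy. The Gaussian-style profile and cutoff below are assumptions for illustration; SIP's 'exp' filter profile may differ in detail:

```python
import numpy as np

def exp_highpass(shape, d0):
    """1 - exp(-d^2 / (2 d0^2)): passes high spatial frequencies (ridges),
    suppresses the smooth low-frequency background."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None] * h   # integer frequency coordinates,
    fx = np.fft.fftfreq(w)[None, :] * w   # origin at [0, 0] like fft2 output
    d2 = fx**2 + fy**2
    return 1.0 - np.exp(-d2 / (2 * d0**2))

# synthetic "ridge" image: a sinusoid (the ridges) sitting on a flat offset
img = np.sin(2 * np.pi * np.arange(64) / 8)[None, :] * np.ones((64, 1)) + 5.0
F = np.fft.fft2(img)
enhanced = np.real(np.fft.ifft2(exp_highpass(img.shape, 4) * F))
```

The filter is exactly 0 at DC, so the flat offset is removed while the ridge oscillation survives almost at full contrast.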

Original Image

FT of the original image

Filter

Enhanced fingerprint

FT of the enhanced fingerprint.

-oOo-

Line Removal In Lunar Image

We have an image of the moon with strong vertical lines, and to enhance it, we have to remove those lines.

Looking at the FFT, we can see that these white lines are produced by the encircled frequency components (the spectrum is symmetric, so the corresponding intensity appears on the right half as well), a consequence of the lines’ periodicity. The goal, then, is to make a filter that lessens the amplitude of the encircled frequencies*.

I made a mask that cuts off the horizontal line along which the encircled frequencies lie. However, I retained a portion near the center to preserve some information. The mask looks like this:

The masked fft looks like this:

Finally, the enhanced image looks like this

and there are no more vertical lines present. 🙂

Code:

clear all;
stacksize(4e7);
im=imread("luna.png");
im=im2gray(im);
im=im; //-mean(im);
ff=fft2(im);
h=scf(1);
ff=fftshift(ff);
imshow(abs(ff), []); xset("colormap", hotcolormap(64));

[x,y]=size(im);
enhancedft=ff;
for i=(x+1)/2-2:(x+1)/2+2 //rows straddling the spectrum's center
enhancedft(i,1:(y+2)*11/24)=0; //zero the horizontal strip, left of center
enhancedft(i,(y+2)*13/24:y)=0; //and right of center, keeping a window around DC
end
enhanced=abs(fft2(enhancedft));

h=scf(2);
imshow(abs(enhancedft), []); xset("colormap", hotcolormap(64));
h=scf(3);
imshow(enhanced ,[]);

*Note: The FFT shown is bright because I removed the mean of the image before performing the FFT; in the actual processing, however, I did not subtract the mean from the image matrix.
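The notch-filter idea can be reproduced on a synthetic frame in Python/NumPy (the image, stripe pattern, and cut widths below are made up for illustration):

```python
import numpy as np

# Synthetic "lunar" frame: smooth scene plus strong vertical lines (stand-in for luna.png)
h, w = 64, 64
scene = np.outer(np.hanning(h), np.hanning(w))            # smooth content
lines = 0.5 * np.sin(2 * np.pi * np.arange(w) / 4)[None, :]  # vertical stripes, period 4 px
img = scene + lines

F = np.fft.fftshift(np.fft.fft2(img))
cy, cx = h // 2, w // 2
notch = F.copy()
# Vertical lines put their energy on the horizontal axis of the spectrum,
# so zero the strip of rows through the center, keeping a small window around DC
notch[cy-1:cy+2, :cx-2] = 0
notch[cy-1:cy+2, cx+3:] = 0
clean = np.real(np.fft.ifft2(np.fft.ifftshift(notch)))
```

Because the DC window is kept, the overall brightness survives while the stripe component is almost entirely removed.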

I have hints from: http://www.roborealm.com/help/FFT.php

Score: 10, since I was able to do all of the tasks, and am proud of the removal of the vertical lines. :)

Collaborators: Julie, Cole, Leil, Benj


Activity 6: Fourier Transform Model of Image

July 8, 2008 at 11:15 am (Uncategorized)

Activity 6-A: Familiarization with discrete FT

Taking the Fourier transform of a circle:

yields

However, the FFT algorithm inverts the quadrants of the FT plane, and this has to be corrected using the fftshift() command. This process yields the correct FT of a circle, whose analytical solution is the Airy pattern, as shown in the figure below.

Performing an FFT on the FFT of an image gives back the original image (flipped), which is one of the properties of the FFT. The image below indeed confirms this.

I = imread('circle.bmp');
Igray = im2gray(I);
FIgray = fft2(Igray); //remember, FIgray is complex
h=scf(1);
imshow(abs(FIgray), []); xset("colormap", hotcolormap(64));
h=scf(2);
imshow(abs(fftshift(FIgray)),[]); xset("colormap", hotcolormap(64));
h=scf(3);
imshow(abs(fft2(FIgray)), []); xset("colormap", hotcolormap(64));
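The "FFT of the FFT" property can be checked numerically in Python/NumPy; note that the result comes back circularly flipped, which matches the inverted letter observed in the convolution section:

```python
import numpy as np

rng = np.random.default_rng(0)
im = rng.random((8, 8))

# forward FFT applied twice, renormalized by the number of samples
twice = np.fft.fft2(np.fft.fft2(im)) / im.size

# the identity: applying the DFT twice sends index k to (-k) mod N on both axes,
# i.e. a circular flip of the image
flipped = np.roll(im[::-1, ::-1], 1, axis=(0, 1))
```

Here `twice` matches `flipped` to machine precision, so a second forward FFT really is an (inverting) inverse transform up to normalization.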

-oOo-

Convolution

Convolution is a process wherein two functions f and g are combined such that each “smears” the other; the result carries features of both.

Also, the convolution theorem states that the Fourier transform of a convolution is the product of the individual Fourier transforms times a constant k.

In optics, this concept can be applied to lenses wherein we have an original image f and a lens g. Thus, with a finite lens, we can never reconstruct an image 100%.
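The convolution theorem itself is easy to verify numerically (a Python/NumPy sketch using circular convolution):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random(16)
g = rng.random(16)

# circular convolution computed directly from the definition...
conv = np.array([sum(f[m] * g[(n - m) % 16] for m in range(16)) for n in range(16)])

# ...equals the inverse FFT of the product of the individual FFTs
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
```

The two arrays agree to machine precision, which is exactly why the aperture can be applied as a simple product in the frequency domain below.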

We obtained the FT of the letter J:

After fftshift, we had

And doing an fft again results to an inverted J.

We now convolve the image with a circular aperture, and this results in:

We can see that the image has been blurred because of the finite size of the aperture. We then investigated the effect of varying the lens/aperture size on the resulting image. From our simulations, we saw that decreasing the size of the lens decreases the clarity of the image, as shown in the following figure.

clear all;
r=imread('smallest.bmp');
a=imread('letters.bmp');
rgray = im2gray(r);
agray = im2gray(a);
Fr = fftshift(rgray); //the aperture acts in the frequency domain, so it is only shifted, not transformed
Fa = fft2(agray);

FRA = Fr.*(Fa);
IRA = fft2(FRA); //inverse FFT
FImage = abs(IRA);
h=scf(1);
imshow(FImage, []);

-oOo-

Template Matching

We performed template matching on the images. In the text below, we located areas with a high degree of correlation with the template “A”. Correlation is high in places where there is a high degree of similarity.

Places with high correlation are indicated by bright white spots. However, the image is upside-down and mirrored. Rotating the correlation image yields a direct correspondence with the locations in the original text.
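The correlation-peak idea can be sketched in Python/NumPy on a synthetic image (the tiny arrays below are made-up stand-ins for word.bmp and A.bmp):

```python
import numpy as np

# Toy "text" image: a 3x3 template stamped at a known spot
img = np.zeros((32, 32))
tpl = np.zeros((32, 32))
stamp = np.array([[1., 0, 1], [0, 1, 0], [1, 0, 1]])
tpl[:3, :3] = stamp          # template at the origin
img[10:13, 20:23] = stamp    # same pattern placed at (10, 20)

# cross-correlation via FFT: IFFT( F(img) * conj(F(tpl)) ) peaks at the template's location
corr = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(tpl))))
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

The brightest spot of `corr` lands exactly where the pattern was stamped, which is the bright-white-spot behavior described above.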

words=imread("word.bmp");
a=imread("A.bmp");
words=im2gray(words);
a=im2gray(a);
ftwords=fft2(words);
fta=fft2(a);
correlation=ifft(fta.*conj(ftwords));
imshow(abs(fftshift(correlation)), []); xset("colormap", hotcolormap(256));

-oOo-

Edge Detection

We performed edge detection using vertical and horizontal filters with the command imcorrcoef(). This command is similar to template matching, but uses a small kernel as the template and works regardless of the relative sizes of the kernel and the image.

let=imread("L.bmp");
let=im2gray(let);
hor = [-1 -1 -1; 2 2 2; -1 -1 -1];
vert= [-1 2 -1; -1 2 -1; -1 2 -1];
von = [-1 -1 -1; -1 8 -1; -1 -1 -1];
pattern=von; //or hor, vert
c=imcorrcoef(let, pattern);
imshow(c);

We can see from the figure above that the correlation technique is helpful in finding the horizontal and vertical edges of the letter L. If we want the full outline of an image, we can use the “von” pattern above, which locates boundaries whether horizontal or vertical.
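The kernel correlation can be sketched in Python/NumPy (a plain valid-mode correlation for illustration, not imcorrcoef's normalized coefficient):

```python
import numpy as np

def correlate2d(im, k):
    """Valid-mode 2-D correlation of image im with a 3x3 kernel k."""
    h, w = im.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(im[i:i+3, j:j+3] * k)
    return out

von = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]])

im = np.zeros((10, 10))
im[3:7, 3:7] = 1  # a filled square: its outline is what the "von" kernel picks out
edges = correlate2d(im, von)
```

The response is zero over the flat interior and background and nonzero only along the boundary, i.e. exactly the outline behavior described above.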

Rating: 9.5, because I wasn’t reading some of the instructions, although I was able to perform all the required tasks.

Acknowledgments: Julie, Cole, Benj and Lei.


Activity 5: Physics Measurements from Discrete Fourier Transform

July 3, 2008 at 9:24 am (Uncategorized)

The discrete Fourier transform is a mathematical construct for determining the frequency components of a signal, very much like breaking down white light (the signal) into its frequency components (colors). The Fourier transform can be computed numerically using

-oOo-

Performing FFT (code provided by Dr. Soriano)

T = 2;
N = 256;
dt = T/N;
t = [0:dt:(N-1)*dt];
f = 5;
y = sin(2*%pi*f*t);
f1 = scf(1); plot(t,y);

FY = fft(y);
F = 1/(2*dt); //Nyquist frequency
df = 2*F/N;
f = [-(df*(N/2)):df:df*(N/2 -1)];

f2 = scf(2); plot(f, fftshift(abs(FY)));
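The same experiment in Python/NumPy, using np.fft.fftfreq to build the frequency axis that the code above constructs by hand:

```python
import numpy as np

T, N = 2.0, 256
dt = T / N
t = np.arange(N) * dt
f0 = 5.0
y = np.sin(2 * np.pi * f0 * t)

FY = np.fft.fft(y)
f = np.fft.fftfreq(N, d=dt)           # same axis as the hand-built [-F, F) in steps of 2F/N
peak = abs(f[np.argmax(np.abs(FY))])  # frequency bin with the largest magnitude
```

The peak lands at exactly 5 Hz, and its magnitude is N/2, which also backs the amplitude observation made further below.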

-oOo-

The method of getting the Fourier transform of images is similar to the process for temporal signals. In the case of images, the pixels take the role of the discrete time samples, and we perform a spatial FT. A two-dimensional FT can be done using

fft2=fft(fft(im).').'

The algorithm above, as mentioned in the MathWorks (MATLAB) documentation, performs a 1-D FT along each column of im and then performs an FT along each row of the result.
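This row–column decomposition can be verified in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
im = rng.random((8, 8))

# 1-D FFT down each column, then 1-D FFT across each row —
# exactly the fft(fft(im).').' recipe, with axis arguments instead of transposes
nested = np.fft.fft(np.fft.fft(im, axis=0), axis=1)
```

`nested` matches `np.fft.fft2(im)` to machine precision, confirming that the 2-D FT is just two passes of the 1-D FT.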

Sine wave (f=5, T=2)

We investigate the effect of varying the number of samples N on the Fourier transform. From the plot below, we can see that by increasing N, we widen the domain of our FT.


On the other hand, with the total time fixed and N increased (effectively decreasing dt), we find that there is not much difference in the peak frequency, and the amplitude is half the value of N. This is because we are adding more signal components compared with lower N.


Activity 4: Image Enhancement

June 26, 2008 at 11:27 am (Uncategorized)

This activity enhances images by stretching the histogram of pixel values so that the maximum is 255 and the minimum is 0. Another requirement is that the cumulative distribution function (CDF) of the pixel values becomes linear. I wrote here a simple imhist function that can be placed in the directory of the source code and called using getf(“imhist.sce”).

Here is the source code of the “imhist.sce” function I wrote:

function [val,num]=imhist(im)
val=[];
num=[];
counter=1;
for i=0:1:255
[x,y]=find(im==i); //finds where im==i
val(counter)=i;
num(counter)=length(x); //how many pixels of im have value i
counter=counter+1;
end
endfunction

-oOo-

After obtaining the probability distribution function (PDF, which is a normalized histogram), we obtain the cumulative distribution function.

taken from: http://home.earthlink.net/~rogergoodman/XRay-2000-06-05-Modabber.jpg

The probability distribution function and the cumulative distribution function

Image enhancement is obtained by taking the value of each pixel and backprojecting it. In a sense, we are simply mapping each pixel of the original image to a new array with the CDF as the mapping function.

Here is the code, which uses my own imhist function. The “imhist.sce” file is loaded from the same directory as the program below.

-oOo-

//Jeric Tugaff
//Image Enhancement

clear all;
getf("imhist.sce");
im=imread("aram.jpg"); //opens a 24-bit image
im=round(im*255); //scale the [0,1] values to integer gray levels
[val, num]=imhist(im);
[sx, sy]=size(im);
num=num/(sx*sy);
normval=val/max(val);
cumnum=cumsum(num)/max(cumsum(num));
h=scf(1);
plot(normval, num);
h=scf(2);
plot(normval, cumnum);

enhanced=[];
im=im/255;
for i=1:sx
for j=1:sy
enhanced(i,j)=cumnum(find(normval==im(i,j))); //backproject through the CDF
end
end
imwrite(enhanced, "enhanced.jpg");
enhanced=round(enhanced*255);

//recompute the histogram of the enhanced image
[val, num]=imhist(enhanced);
num=num/(sx*sy);
normval=val/max(val);
cumnum=cumsum(num)/max(cumsum(num));
h=scf(3);
plot(normval, num);
h=scf(4);
plot(normval, cumnum);

-oOo-

Using any non-linear CDF

Using an arbitrary non-linear CDF, we produced better images than backprojecting along a linear CDF. The flowchart of the pixel backprojection is summarized by this image from Dr. Soriano’s lecture.

Enhanced Image using parabolic CDF

Actual and desired parabolic CDF

Again, there is good (though not perfect) correspondence between the desired (smooth) and actual (jagged) CDFs. Here is my code, which is quite slow but more general, especially in cases where the inverse of the CDF is difficult to solve analytically:

//Jeric Tugaff
//Image enhancement using parabolic CDF

clear all;
getf("imhist.sce");
im=imread("aram.jpg"); //opens a 24-bit image
im=round(im*255); //scale the [0,1] values to integer gray levels
[val, num]=imhist(im);
[sx, sy]=size(im);
num=num/(sx*sy);
normval=val/max(val);
cumnum=cumsum(num)/max(cumsum(num));
h=scf(0);
plot(val, num)

enhanced=[];
desired_cdfx=[0:1:255];
desired_cdfy=desired_cdfx.*desired_cdfx;
desired_cdfy=desired_cdfy/max(desired_cdfy);

for i=1:sx
for j=1:sy
step2(i, j)=cumnum(im(i,j)+1); //+1: gray level 0 maps to index 1
end
end

enhanced=interp1(desired_cdfy, desired_cdfx, step2);
h=scf(1);
imshow(enhanced, [0 255]);
enhanced=round(enhanced);
//recompute the histogram of the enhanced image
[val, num]=imhist(enhanced);
num=num/(sx*sy);
normval=val/max(val);
cumnum=cumsum(num)/max(cumsum(num));
h=scf(3);
plot(val, cumnum);
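The same histogram-specification step in Python/NumPy, with np.interp playing the role of interp1 (a made-up uniform input image is assumed):

```python
import numpy as np

def match_cdf(im, desired_x, desired_y):
    """Backproject through the image CDF, then through the inverse of the
    desired CDF (the interpolation inverts it numerically)."""
    hist = np.bincount(im.ravel(), minlength=256)
    cdf = np.cumsum(hist) / im.size
    step2 = cdf[im]                          # each pixel's cumulative rank in [0, 1]
    return np.interp(step2, desired_y, desired_x)

x = np.arange(256.0)
y = (x * x) / (255.0 * 255.0)                # parabolic desired CDF, as above
rng = np.random.default_rng(4)
im = rng.integers(0, 256, size=(64, 64))     # hypothetical input image
out = match_cdf(im, x, y)
```

For a parabolic CDF the median output should sit near 255·sqrt(0.5) ≈ 180, i.e. the mapping pushes pixel values toward the bright end, as the enhanced image above shows.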

-oOo-

Collaborators: Julie, Lei, Cole.

Rating: 10, because I was able to implement my own imhist function and the resulting cumulative distribution has good correspondence with the desired CDF. 🙂


Activity 3: Image Types and Basic Image Enhancement

June 24, 2008 at 10:30 am (Uncategorized)

Image Types

True Color Image


from: http://www.flickr.com/photos/8700785@N08/643095209/

FileName: truecolor.jpg
FileSize: 189687
Format: JPEG
Width: 500
Height: 493
Depth: 8
StorageType: truecolor
NumberOfColors: 0
ResolutionUnit: inch
XResolution: 72.000000
YResolution: 72.000000

Indexed Images.

indexed images parrot

from: http://en.wikipedia.org/wiki/Image:Adaptative_8bits_palette_sample_image.png

FileName: indexed.png
FileSize: 25576
Format: PNG
Width: 150
Height: 200
Depth: 8
StorageType: indexed
NumberOfColors: 256
ResolutionUnit: centimeter
XResolution: 72.000000
YResolution: 72.000000

Grayscale Image.

This image was obtained from the web. However, it is still 24 bits (which is not a property of grayscale images). To convert it to 8 bits (true grayscale), use the code:

im=imread('grayscale.jpg');
im=im(:,:,1);
imwrite(im(:,:), 'gs.jpg');


from: http://blog.paranoidferret.com/files/Tutorials/CSharp/Grayscale/bw_flower.jpg

FileName: gs.jpg
FileSize: 9217
Format: JPEG
Width: 250
Height: 250
Depth: 8
StorageType: indexed
NumberOfColors: 256
ResolutionUnit: inch
XResolution: 72.000000
YResolution: 72.000000

Histogram and Thresholding

The histogram of the grayscale image was obtained using the following code:

//Jeric Tugaff
//Histogram

im=imread('grayscale.jpg'); //opens a 24-bit image
im=im(:,:,1);
imwrite(im(:,:), 'gs.jpg'); //converts to 8-bit grayscale image
im=imread('gs.jpg');
val=[];
num=[];
counter=1;
for i=0:1:255
[x,y]=find(im==i); //finds where im==i
val(counter)=i;
num(counter)=length(x); //how many pixels of im have value i
counter=counter+1;
end
plot(val, num); //plot. 🙂

We obtained this plot of the histogram for the image of a grayscale flower:

Grayscale image histogram

The two peaks in the histogram plot are a hint that the image is of high quality and is good for thresholding. To do the thresholding, we use the code:

im=imread('gs.jpg');
thresh=140;
im=im2bw(im, thresh/255);
imshow(im);

This results in the following binary image:

Binary Image

Application To Getting Area of Images

Thresholds

The images show the effect of thresholding on the leaf image; the name of each image corresponds to its threshold. We can see that the most appropriate choice is a threshold value of 200/255. The method for getting the area follows from the previous exercise. For this activity, the computed area is 20108.5 while the theoretical area is 20505, an error of 1.9%.

//Jeric Tugaff
//Getting image areas through Green's theorem and grayscale image thresholding

im=imread('leaf_cropped.JPG');
im=im2gray(im); //convert to grayscale
im=im2bw(im, 200/255);
im=1*(im==0); //inverts the image
[x,y]=follow(im);
x1=[];
x1(1:length(x)-1)=x(2:length(x));
x1(length(x))=x(1);
y1=[];
y1(1:length(y)-1)=y(2:length(y));
y1(length(y))=y(1);
area=0.5*abs(sum(x1.*y-y1.*x)) //Green's theorem
ta=0;
[sx, sy]=size(im); //finds the area through rectangles
for i=1:sy
f=find(im(:, i)==1);
ta=ta+max(f)-min(f);
end
ta
err=(area-ta)/ta*100
//[r,s]=imhist('leaf_cropped.JPG');


Activity 2: Measuring Areas Using Green’s Theorem

June 19, 2008 at 10:53 am (Uncategorized)

We use Green’s theorem to find the area of a given regular polygon. Green’s theorem states that we can get the area of a region R bounded by a closed contour C using the formula

In Scilab, the closed contour can be traced using the ‘follow’ command from the SIP toolbox. The syntax of the command is:

[x,y]=follow(g);

which results in the lists x and y containing the x and y coordinates of the contour (or simply the edge), respectively. It should be noted, however, that follow assumes a black (0) background and a white (1) object. The lists x and y give the coordinates of the white pixels nearest the edge.

The theoretical value of the area is measured by counting the number of white pixels (valued 1). Since binary images are converted to ones and zeros by scilab, the theoretical value of the area is simply sum(g).
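The Green's-theorem (shoelace) sum used in the code below can be sketched in Python/NumPy on a contour whose area is known:

```python
import numpy as np

def shoelace(x, y):
    """Green's-theorem (shoelace) area of a closed contour from its vertex lists."""
    x1 = np.roll(x, -1)  # each point's successor along the contour
    y1 = np.roll(y, -1)
    return 0.5 * abs(np.sum(x1 * y - y1 * x))

# contour of a 10x20 rectangle, traced counterclockwise
x = np.array([0, 10, 10, 0])
y = np.array([0, 0, 20, 20])
```

`shoelace(x, y)` gives 200.0, the exact 10×20 area; np.roll does the same cyclic shift the Scilab code builds by hand with x1 and y1.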

//Jeric Tugaff
//Activity 2
//compute for areas using green’s theorem

g=imread('C:\Documents and Settings\AP186user04\Desktop\AP186\Activity 2\square.bmp');
[x,y]=follow(g);
ta=sum(g) //theoretical area
x1(1)=x(length(x));
x1(2:length(x))=x(1:length(x)-1);
y1(1)=y(length(y));
y1(2:length(y))=y(1:length(y)-1);
a=0.5*sum(x1.*y - y1.*x)

square

Results yield an area of 24964 pixels using Green’s theorem, while the theoretical value is 25281, an error of 1.2539061%, for the square. For the circle, the theoretical area is 30937 and the computed area is 30656. Finally, for the triangle, the theoretical area is 20000 and the computed area is 19701.

Collaborators: Julie Mae Dado (for help with some Scilab commands and the conversion of .jpg images to binary format), Benjamin Palmares (for help configuring Scilab), Mark Leo Bejemino (for the invaluable discussion), and Lei Uy (for helping determine which bmp format is suitable, which, by the way, is the 24-bit bmp format).

Rating: 10. Because the computed area is well within the accepted value of the error. 😀


Activity 1: Digital Scanning

June 12, 2008 at 2:06 am (Uncategorized)

The goal of the activity is to reconstruct a hand-drawn graph by extracting data from the pixel locations of the data points. For this activity, we will be extracting data from this graph:

The image is opened in Microsoft Paint, and the pixel values are obtained by pointing the cursor over a pixel; its location is shown in the lower-right portion of the status bar.

We first obtained some values for calibrating our pixel values: the location of the graph origin (bias x and bias y), and the x and y scales relating pixel locations to physical variables.

Bias along x: 56 pixels
Bias along y: 433 pixels
Scale of x: 80 pixels / 5 units (days)
Scale of y: 34 pixels / 10 units (% killed)
Origin: (56, 433)
Image size: (723, 505)

To obtain the reconstructed x values from the pixel values, we use the equation:
x = (raw_x – bias_x) / scale_x = (raw_x-56)*5/80
To obtain the reconstructed y values from the pixel values, we use the equation:
y = (bias_y – raw_y) / scale_y = (433 – raw_y)*10/34
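The two calibration equations can be wrapped in a small helper (Python; the function name is mine, the constants are the ones listed above):

```python
# Hypothetical helper applying the calibration above
def pixel_to_physical(raw_x, raw_y, bias_x=56, bias_y=433,
                      scale_x=80 / 5, scale_y=34 / 10):
    """Convert a Paint pixel coordinate to (days, % killed).
    Screen y grows downward, hence bias_y - raw_y for the y value."""
    return (raw_x - bias_x) / scale_x, (bias_y - raw_y) / scale_y
```

For example, the origin pixel (56, 433) maps to (0, 0), and a pixel 80 px right and 34 px up maps to (5 days, 10 %).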

Pixel values of data points were obtained, and the physical values were calculated.

We can see that the reconstruction is good, with almost all points in the original graph and the reconstructed graph coinciding.

Exercise 1 Superimposed Plot

Rating: 10
Because there is almost perfect correspondence in the values of the reconstructed and original graph.

Acknowledgment: Benjamin Palmares

Graph from the journal Plant Physiology 1926

