Wednesday, December 21, 2011

Automatic PDF Form Filler

Filling out forms is one of my least favorite activities; for programmers / IT people who are used to doing everything electronically, sitting there with pen in hand somehow brings time to a crawl. The boxes are always too small, and if you make a mistake you need to reprint the form and repeat the whole thing again. By hand.

Here is a collection of Python scripts that will allow you to fill out a PDF form automatically. First, convert the PDF into a collection of JPGs using

python convert.py DOC.pdf [target dir]

In [target dir] you should now see DOC-0.jpg, DOC-1.jpg, etc.

Then, you need to identify box locations. For that use locs.py

python locs.py [target dir]/DOC-0.jpg

This brings up a UI tool; as you click on boxes, the coordinates of those boxes are written to a [target dir]/DOC-0.jpg.loc file. Make sure you click on the boxes in a logical order; most forms number their boxes on the page anyway, so that order is a natural choice. The coordinates are written to the loc file as you click, so once you are done, simply close locs.py.

Now in [target dir], start a new file called DOC-0.jpg.fill

This file will carry the values used to fill out your PDF form. Each line in this file corresponds to the line at the same position in DOC-0.jpg.loc; the line orders must match. You can manually tell fill.py to shift text by a number of pixels in the up or down direction by using e.g.

[down=40]bla bla bla

You can also use up, left, and right commands. If you need to change the font size, e.g. to size 20, use [font=20].
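For illustration, a DOC-0.jpg.fill file might look like the following (the names and offsets below are made up; each line lines up with the corresponding line of the .loc file):

John Doe
[font=20]42 Example Street
[down=40]Anytown
2011-12-21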

Once that is done,

python fill.py [target dir]/DOC-0.jpg

This will use the loc file and the fill file, and generate a final DOC-0.jpg-out.jpg.

In this output image you will see the values from the fill file placed at the proper coordinates.

This tool uses ImageMagick, so make sure you install that first. The necessary Python libraries on Ubuntu can be installed with

sudo apt-get install python python-tk idle python-pmw python-imaging python-imaging-tk

An improvement to this code could be using a vision algorithm to automatically detect the location of each box. There is a certain visual pattern to a form -- words are in straight lines, there are big empty spaces in between, and the whole thing is usually surrounded by lines.

Download

Thursday, October 27, 2011

Optflow C++

Here is a slimmed-down mainline C++ program that uses Seppo Pulkkinen's optflow library. The library uses CImg internally.

#include "CImg_config.h"
#include <CImg.h>
#include <sstream>
#include <string>

#include "DenseVectorFieldIO.h"
#include "DualDenseMotionExtractor.h"
#include "PyramidalLucasKanade.h"
#include "SparseVectorFieldIO.h"
#include "VectorFieldIllustrator.h"

using namespace cimg_library;

int main() {

CImg< unsigned char > I1("../examples/test1.png");
CImg< unsigned char > I2("../examples/test2.png");

const int W = I1.dimx();
const int H = I1.dimy();
CImg< unsigned char > I1_smoothed;
CImg< unsigned char > I2_smoothed;
CImg< unsigned char > motionImageF(W, H, 1, 3);
CImg< double > VF, VB;

I1_smoothed = I1.get_channel(0);
I2_smoothed = I2.get_channel(0);

motionImageF.get_shared_channel(0) = I1_smoothed * 0.75;
motionImageF.get_shared_channel(1) = I1_smoothed * 0.75;
motionImageF.get_shared_channel(2) = I1_smoothed * 0.75;

I1_smoothed.blur(3.0, 3.0, 3.0);
I2_smoothed.blur(3.0, 3.0, 3.0);

DenseMotionExtractor* e = new PyramidalLucasKanade(8,3,0.0025,0.0,4,true);
e->compute(I1_smoothed, I2_smoothed, VF, VB);
printf("%f\n",VF[100,100]);


return 0;
}

To compile, drop this file under lib, run make to create the shared library (.so), then build the program with

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.
/usr/bin/c++ -L. -Doptflow_EXPORTS -fPIC -I. -Wall -O2 -frounding-math \
-loptflow -o main main.cpp

Thursday, October 6, 2011

Mumford on Math

"Mathematicians believe in this Platonic universe in that, there is a pre-existing bunch of facts which are true and you never invent anything, you are discovering".

Saturday, October 1, 2011

Optical Flow, Lucas Kanade in Python

Following is the Lucas-Kanade optical flow algorithm in Python. We used it successfully on two PNG images, as well as through OpenCV to follow a point across successive frames. More details are at Github.
import numpy as np
import scipy.signal as si
from PIL import Image

def gauss_kern():
   h1 = 15
   h2 = 15
   x, y = np.mgrid[0:h2, 0:h1]
   x = x-h2/2
   y = y-h1/2
   sigma = 1.5
   g = np.exp( -( x**2 + y**2 ) / (2*sigma**2) );
   return g / g.sum()

def deriv(im1, im2):
   g = gauss_kern()
   Img_smooth = si.convolve(im1,g,mode='same')
   fx,fy=np.gradient(Img_smooth)  
   ft = si.convolve2d(im1, 0.25 * np.ones((2,2))) + \
       si.convolve2d(im2, -0.25 * np.ones((2,2)))
                 
   fx = fx[0:fx.shape[0]-1, 0:fx.shape[1]-1]  
   fy = fy[0:fy.shape[0]-1, 0:fy.shape[1]-1];
   ft = ft[0:ft.shape[0]-1, 0:ft.shape[1]-1];
   return fx, fy, ft

import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as si
from PIL import Image
import deriv
import numpy.linalg as lin

def lk(im1, im2, i, j, window_size) :
   fx, fy, ft = deriv.deriv(im1, im2)
   halfWindow = int(np.floor(window_size/2))
   curFx = fx[i-halfWindow-1:i+halfWindow,
              j-halfWindow-1:j+halfWindow]
   curFy = fy[i-halfWindow-1:i+halfWindow,
              j-halfWindow-1:j+halfWindow]
   curFt = ft[i-halfWindow-1:i+halfWindow,
              j-halfWindow-1:j+halfWindow]
   curFx = curFx.T
   curFy = curFy.T
   curFt = curFt.T

   curFx = curFx.flatten(order='F')
   curFy = curFy.flatten(order='F')
   curFt = -curFt.flatten(order='F')
 
   A = np.vstack((curFx, curFy)).T
   U = np.dot(np.dot(lin.pinv(np.dot(A.T,A)),A.T),curFt)
   return U[0], U[1]
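A minimal usage sketch (my own, assuming the two snippets above are saved as deriv.py and lk.py, and that frame1.png / frame2.png are two hypothetical consecutive grayscale frames):

import numpy as np
from PIL import Image
import lk   # the module containing the lk() function above

# load two consecutive frames as float arrays
im1 = np.array(Image.open('frame1.png').convert('L'), dtype=float)
im2 = np.array(Image.open('frame2.png').convert('L'), dtype=float)

# estimate the optical flow (u, v) at row 100, column 120 with a 15x15 window
u, v = lk.lk(im1, im2, 100, 120, 15)
print u, v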

Monday, August 22, 2011

Plotting a Complex Exponential

We rewrote one of the MIT OCW 18.03 ODE Mathlets in Python. This mathlet was for plotting complex exponentials.
from pylab import *

from matplotlib.widgets import Slider

ax = subplot(121)
subplots_adjust(left=0.1, bottom=0.25)
l1, = plot(None,None, lw=2, color='red')
axis([-1, 1, -8, 8])
title ('$(a + bi)t$', color='blue')
grid()

ax = subplot(122)
subplots_adjust(left=0.1, bottom=0.25)
l2, = plot(None,None, lw=2, color='red')
axis([-3, 3, -3, 3])
title ('$e^{(a + bi)t}$', color='blue')
grid()

axcolor = 'lightgoldenrodyellow'
axa = axes([0.15, 0.1, 0.65, 0.03], axisbg=axcolor)
axb = axes([0.15, 0.15, 0.65, 0.03], axisbg=axcolor)

slidera = Slider(axa, 'a', -1.0, 1.0, valinit=0)
sliderb = Slider(axb, 'b', -8.0, 8.0, valinit=0)

def update(val):
   a = slidera.val
   b = sliderb.val
   t = arange(-1.0, 1.0, 0.001)
   l1.set_xdata(t)
   l1.set_ydata((b/a)*t)

   t = arange(-3.0, 3.0, 0.001)
   l2.set_xdata(exp(a*t)*cos(b*t))
   l2.set_ydata(exp(a*t)*sin(b*t))
   draw()

slidera.on_changed(update)
sliderb.on_changed(update)

show()


Saturday, July 23, 2011

Clustering, Image Segmentation, Eigenvectors and Python

Here is example code for eigenvector based segmentation in Python. For more details, see code here.
import matplotlib.pyplot as plt
import numpy as np

Img = plt.imread("twoObj.jpg")
n = Img.shape[0]
Img2 = Img.flatten(order='C')
nn = Img2.shape[0]

A = np.zeros((nn,nn))

for i in range(nn):
 for j in range(nn):
     A[i,j]=np.exp(-((Img2[i]-Img2[j])**2))

# eig returns the eigenvalues in V and the eigenvectors as the columns of D
V,D = np.linalg.eig(A)
V = np.real(V)
a = np.real(D[:,0]) # first eigenvector (a column of D)
print a

threshold = 0 # filter
a = np.reshape(a, (n,n))
Img[a<threshold] = 255
plt.imshow(Img)
plt.show()



Saturday, April 9, 2011

Myers-Briggs Test in Javascript / Python

Hunch.com apparently uses this method. The Myers-Briggs Test is a psychological profile evaluation system, later expanded upon by David Keirsey. We coded the evaluation scheme in Python, making a few additions. In the original version in David Keirsey's book Please Understand Me II, the answer to each question is either A or B; in the Python code these choices become -1 and +1, and the appropriate array values are summed. Recent versions of this questionnaire carry more (sometimes even five) choices. We noticed that an additional 'neutral' choice made an improvement, so our version carries 3 answers. The evaluation code substitutes -1 for A, +1 for B, and 0 for neutral.

An example of the evaluation algorithm Keirsey uses in his book is above. We simply generate indexes that correspond to the columns seen above (answers arrive in a straight list, numbered from 1 to 70), then do the addition.
import logging

def calculate_mb(choices):

  new_choices = []

  # group the 70 answers into 7 columns: question i, i+7, i+14, ...
  for i in range(1,8):
      new_choices.append([int(choices[j-1]) for j in range(i,71,7) ])

  res = list("XXXX")

  ei = sum(new_choices[0])
  if ei < 0: res[0] = 'E'
  else: res[0] = 'I'

  sn = sum(new_choices[1]) + sum(new_choices[2])
  if sn < 0: res[1] = 'S'
  else: res[1] = 'N'

  tf = sum(new_choices[3]) + sum(new_choices[4])
  if tf < 0: res[2] = 'T'
  else: res[2] = 'F'

  jp = sum(new_choices[5]) + sum(new_choices[6])
  if jp < 0: res[3] = 'J'
  else: res[3] = 'P'

  logging.debug(choices)

  return str(''.join(res))
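A quick illustrative call (the answers below are randomly generated just to show the input format; real input is a list of 70 values, each -1, 0 or +1):

import random

choices = [random.choice([-1, 0, 1]) for _ in range(70)]
print calculate_mb(choices)   # prints a four-letter type, e.g. 'INTP'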

Another version, in HTML using Javascript, can be found here.

Monday, March 21, 2011

ARIMA, Forecasting and Python

I ported the R code found on Rob Hyndman's blog to Python + rpy2. The latter package allows R code to be called from Python, which we use here to access the forecast package. Test R and Python code can also be found in the zip file below.
import rpy2.robjects as R
import rpy2.rinterface as rinterface
from rpy2.robjects.packages import importr
import rpy2.robjects.numpy2ri
from rpy2.robjects.vectors import FloatVector, StrVector
import datetime
import numpy as np
forecast = importr("forecast")
R.r.source("arima-fn.R")

x = np.loadtxt("garan.dat",  skiprows=1)
n = len(x)
m = 10
terms = 4

x = np.insert(x, 10, np.nan) # put in a NaN to see what happens
n += 1

print x

R.r.assign('x',x)
R.r.assign('n',n)
R.r.assign('m',m)
R.r.assign('terms',terms)

R.r('fit = Arima(x=x, order=c(2,0,1), xreg=fourier(1:n,terms,m))')
R.r('f = forecast(fit, h=2*m, xreg=fourier(n+1:(2*m),terms,m))')
res = R.r('forecast_results(f)')
for x in res:
  print x

All Code

arima-fourier2.R

forecast_results <- function(res)
{
  return (res$mean)
}

fourier <- function(t,terms,period)
{
  print (terms)
  n <- length(t)
  X <- matrix(,nrow=n,ncol=2*terms)
  for(i in 1:terms)
  {
    X[,2*i-1] <- sin(2*pi*i*t/period)
    X[,2*i] <- cos(2*pi*i*t/period)
  }
  colnames(X) <- paste(c("S","C"),rep(1:terms,rep(2,terms)),sep="")
  return(X)
}

library(forecast)

yy <- read.table ("garan.dat", header=T)
y <- yy[,'price']

n <- length(y)
print (n)
m <- 10 # how many points in the future
terms = 4

fit <- Arima(y, order=c(2,0,1), xreg=fourier(1:n,terms,m))

f = forecast(fit, h=2*m, xreg=fourier(n+1:(2*m),terms,m))

print (forecast_results(f))

plot(f)

arima-fouerier.R

n <- 2000
m <- 200
y <- ts(rnorm(n) + (1:n)%%100/30, f=m)
print (y)
quit()
fourier <- function(t,terms,period)
{
  n <- length(t)
  X <- matrix(,nrow=n,ncol=2*terms)
  for(i in 1:terms)
  {
    X[,2*i-1] <- sin(2*pi*i*t/period)
    X[,2*i] <- cos(2*pi*i*t/period)
  }
  colnames(X) <- paste(c("S","C"),rep(1:terms,rep(2,terms)),sep="")
  return(X)
}

library(forecast)
fit <- Arima(y, order=c(2,0,1), xreg=fourier(1:n,4,m))
f = forecast(fit, h=2*m, xreg=fourier(n+1:(2*m),4,m))
plot(f)

print (y)
print (f)

arima.py

import rpy2.robjects as R
import rpy2.rinterface as rinterface
from rpy2.robjects.packages import importr
import rpy2.robjects.numpy2ri
from rpy2.robjects.vectors import FloatVector, StrVector
import datetime
import numpy as np
forecast = importr("forecast")
R.r.source("arima-fn.R")

x = np.loadtxt("garan.dat",  skiprows=1)
n = len(x)
m = 10
terms = 4

x = np.insert(x, 10, np.nan) # put in a NaN to see what happens
n += 1

print x

R.r.assign('x',x)
R.r.assign('n',n)
R.r.assign('m',m)
R.r.assign('terms',terms)

R.r('fit = Arima(x=x, order=c(2,0,1), xreg=fourier(1:n,terms,m))')
R.r('f = forecast(fit, h=2*m, xreg=fourier(n+1:(2*m),terms,m))')
res = R.r('forecast_results(f)')
for x in res:
    print x

garan.dat

price
6.40000
6.33000
6.43000
6.53000
6.48000
6.33000
6.28000
6.33000
6.33000
6.18000
6.28000
6.28000
6.43000
6.48000
6.23000
6.38000
6.33000
6.23000
6.45000
6.28000
6.38000
6.43000
6.45000
6.23000
6.04000
5.84000
5.94000
5.84000
5.95000
5.89000
5.94000
6.00000
6.35000
6.13000
6.38000
6.23000
6.04000
5.79000
5.49000
5.70000


Sunday, March 13, 2011

Osher on Mathematicians and Engineers

From Stanley Osher's talk:
The main thing to do as applied mathematicians is to be willing to listen to the other person's language, try to understand it and translate it into something that makes sense mathematically. Engineers are very clever; when you talk to engineers in many different fields, they have an algorithm that works, they don't know why it works, it's not rigorous, but it's great. Then your job is to figure out how to generalize it and make mathematics out of it. That happened [to us] many, many times. Lots of fantastically successful algorithms came from the intuition of engineers, then they were made precise later by mathematicians.

Stanley Osher: Mathematician with an Edge

From levelset.com

Stanley Osher is an extraordinary mathematician who has made fundamental contributions to applied mathematics, computational science and scientific computing and who has cofounded three companies based, in part, on his research. He has applied his pioneering work on level set methods and other numerical methods for partial differential equations to the field of image processing and, in particular, to video image enhancing and movie animation. He has been featured prominently in the scientific and international media such as Science News, Die Zeit and Los Angeles Times. He is perhaps the most highly cited researcher in the field of scientific computing.

He received the NASA Public Service Group Achievement Award, Japan Society of Mechanical Engineers Computational Mechanics Award and the SIAM Pioneer Prize. He was an invited speaker at the International Congress of Mathematicians.

He is currently Director of Special Projects at the Institute for Pure and Applied Mathematics (IPAM) at the University of California at Los Angeles and Director of Applied Mathematics.

The Editor of Imprints interviewed Stanley Osher on 17 December 2003 when he was an invited guest at the Institute's program on image processing and gave a public lecture. The following is based on an edited transcript of an interview in which he talked about the fascination of applied mathematics and his total dedication to research and applications.

I: Thank you very much, Professor Osher, for agreeing to be interviewed. In which area did you do your Ph D?

O: I did my Ph D in an esoteric area in functional analysis, which is in pure mathematics. I left it immediately and switched to numerical analysis after my thesis. I was lucky enough to talk to some people, including Peter Lax, who suggested the numerical stuff.

I: Did you find your real inclinations in applied mathematics?

O: Yeah, but it did not happen until after I got my degree. I liked everything and I specialized more after my Ph D.

I: Did you use functional analysis later on in your work?

O: Yes. The first thing I did in numerical analysis was an application of Toeplitz Matrices. It used functional analysis and was short and elegant.

I: You switched to applied mathematics and then eventually to something very practical like applying to movie animation.

O: This just happened - that's the way research leads you. You cannot predict these things. Together with colleagues and students, I was using the level method, which is a way of determining how surfaces such as bubbles move in three dimensions, how they merge and so on. You can simulate the flow of bubbles, planes and things like that. It happened that people in the movie industry got interested in this stuff.

I: Are these pure mathematics problems?

O: It's a way of representing surfaces and has connections with differential geometry. People prove theorems about these things. They have applications in many areas including fluid dynamics and quantum mechanics. They arise in the movie industry because you want to see how things merge and split, like in explosions or rising bubbles. You know, our Governor, Schwarzenegger, used a lot of these methods, done by my former student Ron Fedkiw, in his latest movie.

I: Did you have to use the mathematics of many different areas to solve those problems?

O: Sure. The key thing I would say is partial differential equations, elementary differential geometry, numerical analysis. The language of applied mathematics is differential equations.

I: How did you get into the movie animation business?

O: We had a week of movie industry people coming into UCLA (University of California at Los Angeles) giving lectures about what they did and, in fact, imaging science was highlighted by the American Mathematical Society one year. There was a week in that stuff. We invited people from the local movie industry and they were interested in what we were doing. They wound up arguing with me. The water in the "Titanic", which won many Academy awards, was very bad. It was old-fashioned stuff. Their people came to talk about that and so we decided we could do better than that. In recent movies, the water is much more realistic. The first movie that actually used sophisticated water was "Ants". Now level sets are used by movies like "Shrek", "Terminator" and many blockbusters. My former student Ron Fedkiw is doing this movie animation stuff very well. The stuff that my colleagues and I do has applications in many other places besides the movie industry.

I: I believe that another of your dramatic achievements is the use of mathematics in catching criminals. How did this come about?

O: Well, I was living in Los Angeles when the city went up in smoke. There was a big riot in Los Angeles after this guy (Rodney King) was beaten up by the police.

The riot resulted in people being arrested for looting and beating up passers-by. There was a video recording of the bad guys beating up truck driver Denny and it showed a speck on the arm of a man throwing a brick at Denny.

It turned out that I had a friend who knew the District Attorney or somebody, and I was then doing video image enhancement with my colleague L. Rudin. We were able to resolve the speck into a rose tattoo and it was a great application of what we were doing. After the Denny case trial (the tattoo led to the conviction of the suspect) we had a lot of media publicity and our company specialized in the area of image enhancement. Eventually I sold my share of the company to Rudin. He has a package on video image enhancement which is used by police around the world, and he's quite successful.

I have left this business. It was quite fun and related to mathematics. Image processing is the real world and the graphics that is manufactured is the fake world. You want to find out what the image really is.

I: Do you actually go and seek out those problems or do people come looking for you?

O: It's hard to say. Sometimes it's serendipity. Things just happen. I was lucky. In all my years of doing science, I managed to work with the right people who knew what the problems were. For example, I knew nothing about image processing at all after my PhD. Then this guy Rudin came over to me and asked me about some work I had done in fluid dynamics on supersonic flow and shock waves. I asked him what he wanted to know and I got fired up. He was a computer scientist and he realized that shock waves had something to do with imaging. It was a fantastic observation and our collaboration worked out well.

I: The scope of movie animation has just opened up, hasn't it?

O: Yeah. But I'm not sure this is the best field of application for somebody to work in because the market is small except for video games which is a big business. But video games require real time imaging. What we do is not real time; it's too slow. That might change.

I: It could be just a matter of computing power.

O: Yeah, yeah, and also hardwiring in level set methods, which may come in time.

I: Pattern recognition used to be sort of very big.

O: I'm not sure of the definition of pattern recognition, but image processing is related to it. The key idea in everything we do is about "edges", which characterize images. Edges, and now, textures. If you look at a table, for example, the flat part is not very interesting but the boundary of it is. It's the discontinuity. If I look at you, I can see you because of the outline. The outline is very important and the mathematics that was used in other areas of science like fluid dynamics specialise in things like edges and boundaries. And now images.

I: Does it mean that if you have a vague or blurred object, you can always refine it?

O: Yeah, but it's usually difficult to do so when you have edges because near the edges you will get spurious oscillations. The techniques we have developed, which came originally from fluid dynamics, are now useful for removing those unreal artifacts.

I: How do you know that what you get is the actual object?

O: That's a good question. But you can get enough science behind it and you can prove theorems, algorithms converge and stuff like that.

I: You look at the picture and it's so blurred. And then you do this and you get that. There's a lot of faith in that.

O: I won't disagree with that completely. But when I drop something, I expect it to fall and not go up. The probability is not zero that it might go up. But with a high degree of certainty, you can say that this is a realistic picture.

I: Does that mean there is a probabilistic element in it.

O: Oh, yes. That's always true with nature.

I: For example, in the police case, couldn’t the guy dispute your picture?

O: Yes, but the jury believed us. Fingerprints are also subjective. Fingerprints are very good examples of image processing of a certain sort.

I: What about problems in voice recognition?

O: It's a different kind of mathematics. I'm not an expert in it. The techniques we use in image processing involving differential equations have never been used for sound. But I understand there's some very new stuff along these lines. It's only beginning and just developing. The Institute for Pure and Applied Mathematics (IPAM) at UCLA is running a program on sound and how the ear works and the related mathematics.

I: What about artificial intelligence?

O: It's highly dubious. Among the things I do is designing computer chips and you use mathematics which is not intuitive. Engineers cannot intuit what the right shape should be. One of my engineering friends said this is artificial intelligence. The computer would do something the engineer cannot imagine. That's not the usual definition of artificial intelligence. If you have solid mathematics, you can do things that are intuitively not obvious. But the experienced engineer makes a leap of understanding. If you call that artificial intelligence, so be it. But it's not the kind that people usually talk about.

I: The imaging business is also very important in astronomy, isn't it? They take pictures which are so faint.

O: Very much so. Astronomers have done very good work in this area. They had to over the years. The early work in this stuff in astronomy was very fascinating. They did things which are precursors of what is going on now.

I: You mean the astronomers actually did something mathematical?

O: Yes. That's very often the case. The good thing about being an applied mathematician is that when you work in different areas you find very brilliant people in other areas of science develop mathematical algorithms without realising what they are doing and which can be generalized to other areas. So it's a question of language.

I: Are more astronomers talking to mathematicians now?

O: Actually we'll be running a program on computational astronomy in IPAM a year from now. Astronomy is just one example. Throughout science mathematics is playing more and more of the key role. In the Institute in which I am involved, its mission is to do interdisciplinary work. People from different areas of science have problems which they think are mathematical. And our goal is to make mathematics out of it. Now they believe that we can do something, mainly because of the computer.

I: That's interesting. So now mathematics is also contributing towards understanding the origin of the universe. This is something not many people are aware of.

O: Well, I'm not an expert in the field, but yes, absolutely. I think that the world is governed by differential equations.

I: Your work involves a lot of algorithms. Do you invent the algorithms?

O: That's what we do. That's the fun part of it. I was interviewed by the LA Times (so was Tony Chan for the same article) and I said that I wrote the algorithms that make the computer sing. I am the Barry Manilow of mathematics.

I: Talking about algorithms, some people consider algorithms to be inventions.

O: Yeah, they are. You can actually patent them now. It used to be that a patent has to contain a device with wires and everything. But I understand the US Patent Office is now more liberal. I'm no lawyer, but I know that as the years go by, the Patent Office seems more and more interested in giving legal protection to algorithms.

I: You could be rich. Hollywood would be paying you millions.

O: People work for salaries. There is money, ego and fun. It's a very nonlinear function. I don't know which is most important. Fun is very important. It's a very good life. I would recommend people going into this stuff now. If you have the talent for it, it's the best life.

I: It's something very different, something, how do you say, non-academic?

O: In some sense, yes. You learn things, you read stuff and you learn new ideas, and you are fired up. Sometimes you deliver something different from what you have found. You have a vague idea that something interesting is going to come up. You wander around and something happens. Then you get very excited. It's like opening a door and you don't know what good things are behind it. You're not sure where it's going to end and what the level of success it's going to be. It's very exciting. Everyday I can't wait to go to work. People often asked me, “What kind of life is this that work is so important?” People go on vacation. My work is vacation.

I: Do you have many Ph D students?

O: Many, disproportionately Chinese. We have many good Chinese students at UCLA.

I: Have you used your methods for something more serious like weather forecasting and earthquake prediction?

O: Yeah. The differential equations and some stuff I used to do in fluid dynamics and done by many other very good people are absolutely useful in weather prediction. I still do work on explosives and multi-phase flows and ray tracing. Physical phenomena apart from imaging is very much a part of my research with my colleagues and students.

I: What about the theory of turbulence?

O: Ah, turbulence is too dangerous. If you touch turbulence, you get burnt. One of my mentors once said he had great respect for people in turbulence, which is far more than they have for each other. Turbulence is too controversial. Turbulence has a probabilistic aspect to it, it's statistical and that's not my thing.

I: It's not algorithmic?

O: It's a different kind of algorithm, a different kind of intuition. I do understand it, but I've never been a big user of that stuff. It's very dangerous even to discuss it, it's like politics. Everybody condemns the other guy's theory. It's somewhat like religion.

I: That's surprising. The methods are mathematical, the equations are mathematical.

O: Yeah. The problems are very hard and complicated. You get some very good mathematics coming out of it but I'm not sure the physics is understood in analysing what really happens.

I: I'd like to ask you a philosophical question. How does your work affect your view of life?

O: In terms of how research affects my philosophy? The basic idea is to try to make order out of this life that we live. Everyday you encounter things and it's a messy world. The goal is to take this mess that we see and somehow "mathematize" it and make a prediction. In that sense, research has certainly affected my philosophy. I try to figure out what is going on. The most complicated thing is how our human nature operates. It will be fun to understand that. Many people I know at UCLA and elsewhere are using medical imaging to understand how the topology and shape of the brain affects its function. They use the mathematics that I am involved in. The greatest mystery of all is human behaviour and maybe it can be explained by level sets.

I: Would you say that your present interest and activity is, in some way, directed by your own personal philosophy towards finding order?

O: Yeah, absolutely. I came from being a poor boy in Brooklyn. I wanted some order in my life, to become middle-class and to have a life that I enjoy. Then I stumbled onto this thing, and wow, that's very good for creating order. I entered graduate school at New York University in 1962, when it was a fantastic place for applied mathematics, maybe one of the best ever in the world. All the top people from Goettingen wound up in New York and it was so exciting.

I: Was that the Courant Institute?

O: Yes, the Courant Institute. In 1962, when I entered, it was incredible. The people who were there and the atmosphere. You felt that you were doing something important. It had a very great influence on me and many other people.

I: Was Courant still there?

O: He was still around, old but still functioning. He added many people, all of whom I thought of as being old, something like 20 years younger than I am now. They were great people and it was a fantastic time. There were many people like me who were ethnic New Yorkers and who viewed becoming a mathematician as a way of becoming middle-class American citizens.

I: What attracted you to UCLA?

O: In truth, there was a guy I was working with: Andy Majda. He is an excellent applied mathematician. He was there and they were building an applied mathematics group. Also, I liked California. When I was in New York, I was always dreaming about the Beach Boys. Sunshine and California together with math was great - everything that I wanted. Over the years we have had very nice people. The atmosphere at UCLA is extremely good. People who visited us commented on how well we got along with each other, which is unusual in academia. It has worked out quite well.

Thursday, March 10, 2011

Image Segmentation using Active Contours, Level Sets

Here is a new image segmentation Python code, a port of this Matlab file. It uses level sets and mean curvature motion, and is able to segment the sample image after a few iterations. We had to write our own bwdist, a wrapper around scipy's distance_transform_edt, so that it behaves the same way as Matlab's bwdist(); some conversions were necessary to make this work. We also changed the initialization of \phi, making it simpler and closer to Dr. Li's code we shared before.

Code

import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as signal
import scipy.ndimage as image
import time
from scipy import ndimage

def bwdist(a):
   """
   this is an intermediary function, 'a' has only True, False vals,
   so we convert them into 0, 1 values -- in reverse. True is 0,
   False is 1, distance_transform_edt wants it that way.
   """
   b = np.ones(a.shape)
   b[a==True] = 0.
   return ndimage.distance_transform_edt(b)

def gauss_kern():
   """ Returns a normalized 2D gauss kernel array for convolutions """
   h1 = 15
   h2 = 15
   x, y = np.mgrid[0:h2, 0:h1]
   x = x-h2/2
   y = y-h1/2
   sigma = 1.5
   g = np.exp( -( x**2 + y**2 ) / (2*sigma**2) );
   return g / g.sum()

Img = plt.imread("twoObj.bmp")
Img = Img[::-1]
g = gauss_kern()
Img_smooth = signal.convolve(Img,g,mode='same')
Iy,Ix=np.gradient(Img_smooth)
absGradI=np.sqrt(Ix**2+Iy**2);
rows, cols = Img.shape

# initial function phi - level set is a square 4 pixels
# away from borders on each side, in 3D it looks like an empty
# box
c0=4
w=4
nrow, ncol=Img.shape
phi=c0*np.ones((nrow,ncol))
phi[w+1:-w-1, w+1:-w-1]=-c0

# edge-stopping function
g = 1 / (1+absGradI**2)

# gradient of edge-stopping function
gy,gx = np.gradient(g)

# gradient descent step size
dt=.4

# number of iterations after which we reinitialize the surface
num_reinit=10

phiOld=np.zeros((rows,cols))

# number of iterations after which we reinitialize the surface
iter=0

plt.ion()

while np.sum(np.sum(np.abs(phi-phiOld))) != 0:
   # gradient of phi
   gradPhiY, gradPhiX = np.gradient(phi)   
   # magnitude of gradient of phi
   absGradPhi=np.sqrt(gradPhiX**2+gradPhiY**2)
   # normalized gradient of phi - eliminating singularities
   normGradPhiX=gradPhiX/(absGradPhi+(absGradPhi==0))
   normGradPhiY=gradPhiY/(absGradPhi+(absGradPhi==0))
  
   divYnormGradPhiX, divXnormGradPhiX=np.gradient(normGradPhiX)
   divYnormGradPhiY, divXnormGradPhiY=np.gradient(normGradPhiY)
                         
   # curvature is the divergence of normalized gradient of phi
   K = divXnormGradPhiX + divYnormGradPhiY   
   tmp1 = g * K * absGradPhi
   tmp2 = g * absGradPhi
   tmp3 = gx * gradPhiX + gy*gradPhiY
   dPhiBydT =tmp1 + tmp2 + tmp3
   phiOld=phi
   # level set evolution equation   
   phi = phi + ( dt * dPhiBydT )
   iter=iter+1
   if np.mod(iter,num_reinit)==0:
       # reinitialize the embedding function after num_reinit iterations
       phi=np.sign(phi)
        phi = (phi > 0) * (bwdist(phi != 0))
      
   if np.mod(iter,10)==0:
       time.sleep(0.6)
       plt.imshow(Img, cmap='gray')
       plt.hold(True)
       CS = plt.contour(phi,0, colors='r')
       plt.draw()
       plt.hold(False)

Thursday, February 3, 2011

Very Applied Math

From John D. Cook's site
“Applied math” has come to mean math that could be applied someday. I use the term “very applied math” to denote that subset of applied mathematics that actually is applied to real problems.

My graduate advisor used to say “Applied mathematics is not a subject classification. It's an attitude.” By that token, I'd say that very applied math is also an attitude, an interest in the grubby work required to see the math actually used and a willingness to carry it out. This involves not just math but also computing, consulting, managing, marketing, etc.

Sunday, January 30, 2011

Heat Equation solution using Finite Difference and Crank-Nicolson

I rewrote parts of this code so that it uses dense (non-sparse) matrices instead of sparse matrices, for demonstration purposes. The code is below:
import numpy as np
import scipy.linalg

# Number of internal points
N = 200

# Calculate Spatial Step-Size
h = 1/(N+1.0)
k = h/2

x = np.linspace(0,1,N+2)
x = x[1:-1] # get rid of the '0' and '1' at each end

# Initial Conditions
u = np.transpose(np.mat(10*np.sin(np.pi*x)))

# second derivative matrix
I2 = -2*np.eye(N)
E = np.diag(np.ones((N-1)), k=1)
D2 = (I2 + E + E.T)/(h**2)

I = np.eye(N)

TFinal = 1
NumOfTimeSteps = int(TFinal/k)

for i in range(NumOfTimeSteps):
    # Solve the System: (I - k/2*D2) u_new = (I + k/2*D2)*u_old
    A = (I - k/2*D2)
    b = np.dot((I + k/2*D2), u)
    u = scipy.linalg.solve(A, b)

Once the loop finishes, the last u holds the solution at the final time. More detailed code along with the sparse matrix version is here.
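As a quick sanity check (my own sketch, not part of the original code), u can be compared with the analytic solution u(x,t) = 10 exp(-pi^2 t) sin(pi x) of this heat equation problem:

# compare against the exact solution at the final time reached by the loop
t_final = NumOfTimeSteps * k
u_exact = 10 * np.exp(-np.pi**2 * t_final) * np.sin(np.pi * x)
print np.max(np.abs(np.ravel(u) - u_exact))   # should be a small number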

Thursday, January 20, 2011

Chan Vese Segmentation Algorithm

Attached is the Chan-Vese segmentation algorithm written in C++ by Robert Crandall. Just run make, then run the binary "segment", which will process the test.pgm file and create CVoutEdges.pgm and CVoutRegions.pgm. In the attached zip you can also find his project paper, ECE532_ProjectPaper.pdf.

Code

Saturday, January 15, 2011

Discrete Laplacian - del2

I ported the Matlab del2() function, which computes the discrete Laplacian, to Python. Here it is for reference:
import numpy as np

def del2(M):
    dx = 1
    dy = 1
    rows, cols = M.shape
    dx = dx * np.ones((1, cols - 1))
    dy = dy * np.ones((rows - 1, 1))

    mr, mc = M.shape
    D = np.zeros((mr, mc))

    if (mc >= 3):
        ## x direction
        ## left and right boundary
        D[:, 0] = (M[:, 0] - 2 * M[:, 1] + M[:, 2]) / (dx[:, 0] * dx[:, 1])
        D[:, mc-1] = (M[:, mc - 3] - 2 * M[:, mc - 2] + M[:, mc-1]) \
            / (dx[:, mc - 3] * dx[:, mc - 2])

        ## interior points
        tmp1 = D[:, 1:mc - 1]
        tmp2 = (M[:, 2:mc] - 2 * M[:, 1:mc - 1] + M[:, 0:mc - 2])
        tmp3 = np.kron(dx[:, 0:mc - 2] * dx[:, 1:mc - 1], np.ones((mr, 1)))
        D[:, 1:mc - 1] = tmp1 + tmp2 / tmp3

    if (mr >= 3):
        ## y direction
        ## top and bottom boundary
        D[0, :] = D[0, :] + \
            (M[0, :] - 2 * M[1, :] + M[2, :]) / (dy[0, :] * dy[1, :])

        D[mr-1, :] = D[mr-1, :] \
            + (M[mr-3, :] - 2 * M[mr-2, :] + M[mr-1, :]) \
            / (dy[mr-3, :] * dy[mr-2, :])

        ## interior points
        tmp1 = D[1:mr-1, :]
        tmp2 = (M[2:mr, :] - 2 * M[1:mr - 1, :] + M[0:mr-2, :])
        tmp3 = np.kron(dy[0:mr-2, :] * dy[1:mr-1, :], np.ones((1, mc)))
        D[1:mr-1, :] = tmp1 + tmp2 / tmp3

    return D / 4
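A small check of the port (my own sketch, not from the original post): for M(i,j) = i^2 + j^2 sampled on a unit-spaced grid the Laplacian is 4, and since del2 returns the Laplacian divided by 4, the output should be 1 in the interior.

import numpy as np

x, y = np.meshgrid(np.arange(50, dtype=float), np.arange(50, dtype=float))
M = x**2 + y**2
D = del2(M)
print D[10:15, 10:15]   # interior values should all be close to 1.0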

Level Sets and Image Segmentation with Python

I ported the Matlab code written by Li, Xu, Gui and Fox for their paper Level Set Evolution Without Re-initialization: A New Variational Formulation to Python. Li et al. use level sets with a new approach and are able to segment an image successfully. Some screenshots as well as the code itself are below.







Code

Friday, January 14, 2011

Gaussian Convolution, Blurring using Python

import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as signal

def gauss_kern():
   """ Returns a normalized 2D gauss kernel array for convolutions """
   h1 = 15
   h2 = 15
   x, y = np.mgrid[0:h2, 0:h1]
   x = x-h2/2
   y = y-h1/2
   sigma = 1.5
   g = np.exp( -( x**2 + y**2 ) / (2*sigma**2) );
   return g / g.sum()

Img = plt.imread("twoObj.bmp")
Img = Img[::-1]
g = gauss_kern()
Img_smooth = signal.convolve(Img,g,mode='same')
plt.imshow(Img_smooth)
plt.show()
Results:

Before Image



After Image
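As a cross-check (my own addition, appended to the script above and assuming SciPy's ndimage module is available), the same blur can be produced with a ready-made Gaussian filter; away from the borders, where the padding conventions differ, the two results should agree closely:

import scipy.ndimage as ndimage

Img_smooth2 = ndimage.gaussian_filter(Img.astype(float), sigma=1.5)
# compare only away from the borders, where the padding differs
print np.abs(Img_smooth[20:-20, 20:-20] - Img_smooth2[20:-20, 20:-20]).max()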