The Toughest Problem in Calculus

Lecture delivered by Philip Emeagwali

TRANSCRIPT
In 1989, it made the news headlines
that an “African Computer Wizard”
discovered
how to supercompute
at the fastest computer speeds.
It was rare news because
a supercomputer,
that is, the world’s fastest computer,
costs more than the budget
of a small nation.
I used my supercomputer
to solve the toughest problem
in calculus.
At the granite core
of my mathematical grand challenge
was the system of non-linear
partial differential equations
of infinitesimal calculus
that was impossible to solve.
Those equations cannot be solved
when formulated
to help foresee global warming
or to recover more oil and gas.
By definition,
a system of partial differential equations
cannot be solved directly
on a computer.
The reason is that
the word “differential”
arose from the term “differentialis,”
which translates to “taking apart”
or “taking differences.”
For the partial differential equations
of infinitesimal calculus,
such differences are infinitely small.
That is, they yield infinitely many calculations
that would take forever
to compute.
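In symbols, and as a generic illustration rather than the speaker's own nine equations: the partial derivative is defined through differences that shrink all the way to zero, whereas a finite difference stops at a small but nonzero spacing that a computer can evaluate in finitely many steps.

\[
\frac{\partial u}{\partial x}(x_i)
= \lim_{\Delta x \to 0} \frac{u(x_i + \Delta x) - u(x_i)}{\Delta x}
\;\approx\; \frac{u_{i+1} - u_i}{\Delta x},
\qquad \Delta x > 0 \ \text{fixed and finite}.
\]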

A grand challenge problem
that takes forever to solve
is impossible to solve.
A grand challenge equation
formulated at infinite points in calculus
cannot be solved on a computer,
unless it is reformulated
at finite points in algebra.
That reformulation is necessary
to make the impossible
possible.
If I had taken infinitesimally small differences,
the forever impossible
would have taken forever
to solve, even across
a global network of computers
as large as planet Earth.
To make the impossible
possible,
I had to discretize my continuous
space and time and functions.
I had to use their finite differences.
Finally, I had to use
the finite difference approximations
that I invented
to approximate
the nine partial differential equations
that I invented,
as well as approximate
the other partial differential equations
and equations of state
that were invented
about a century earlier.
I used those finite difference equations
to formulate
my algebraic approximations,
which formed the
largest system of algebraic equations
ever solved
on and across
computers.
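As a minimal sketch of that translation from calculus to algebra, here is a toy one-dimensional boundary value problem, not the nine coupled equations described above, discretized by finite differences into a finite system of algebraic equations; the grid size and right-hand side are illustrative.

```python
# A minimal sketch (a toy model, not the nine coupled equations above):
# discretize the one-dimensional boundary value problem
#   -u''(x) = f(x),  u(0) = u(1) = 0,
# with finite differences, turning calculus into a finite system of
# algebraic equations  A u = b  that a computer can solve in finitely
# many arithmetical operations.
import numpy as np

n = 99                      # a finite number of interior grid points
dx = 1.0 / (n + 1)          # a finite spacing replaces the infinitesimal dx
x = np.linspace(dx, 1.0 - dx, n)
f = np.sin(np.pi * x)       # an arbitrary right-hand side for the demonstration

# Finite difference analogue of the second derivative:
#   -u''(x_i)  ~  (-u[i-1] + 2 u[i] - u[i+1]) / dx**2
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dx**2
b = f

u = np.linalg.solve(A, b)   # the algebraic system stands in for the calculus

# Check against the exact solution u(x) = sin(pi x) / pi**2
print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))
```

The system has finitely many unknowns and finitely many equations, which is what makes it computable; the grand challenge versions differ in being coupled, nonlinear, time-dependent, and enormously larger.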
The computer wizardry was in
consistently telling and retelling
the same story across boards.
I told the story of the motion
of fluids flowing below
or on
or above
the Earth.
I told that story from the storyboard
to the blackboard
to the motherboard
and continuing to the boardroom
to the classroom
to the newsroom
to the living room.
I translated calculus to algebra
to obtain algebraic approximations
that arose from
the finite difference analogue.
On my motherboard
was the analogue
of a partial differential equation
that originated on the blackboard
that was the codification
of a law of physics
that originated in my storyboard.

My Contributions to Calculus

The phrase
“partial differential equations”
was first used in 1845.
I, Philip Emeagwali,
first came across it
in June 1970 in Onitsha, Nigeria,
in my 568-page
blue hardbound textbook titled:
“An Introduction to the Infinitesimal Calculus.”
It was subtitled:
“With Applications to Mechanics
and Physics.”
That calculus book was written by
G.W. [George William] Caunt
and published by
Oxford University Press.
A decade earlier,
I began learning the times table
in January 1960
as a five-year-old
at Saint Patrick’s Primary School, Sapele,
in the British West African colony
of Nigeria.
The partial differential equation
of calculus
is not congenial to the fifth grader.
It takes ten years
for a five-year-old
to gain the mathematical maturity
needed to learn calculus.
Because partial differential equations
are the most advanced expressions
in calculus,
it takes another ten years of training
for that 15-year-old
research mathematician-in-training
to gain the mathematical maturity
needed to discretize a system of
coupled, nonlinear
partial differential equations.
That term “discretize”
is the mathematical lingo
for approximating
a differential equation
defined at infinite points
with corresponding algebraic equations
defined at finite points
that converge to it.
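For example, a single linear transport equation, far simpler than the coupled, nonlinear systems discussed here, can be discretized on a grid with a finite spacing and a finite time step:

\[
\frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x} = 0
\quad\longrightarrow\quad
u_i^{n+1} = u_i^n - c\,\frac{\Delta t}{\Delta x}\,\bigl(u_i^n - u_{i-1}^n\bigr),
\]

an algebraic update defined at finitely many grid points and time levels that, for a positive wave speed and a suitably small time step, converges to the calculus as the spacing and the time step shrink.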
The partial differential equations
that describe the motions of fluids
must be formulated
from the laws of physics.
They must be formulated
from the storyboard
to the blackboard.
But the partial differential equations
used to foresee global warming
or to recover oil and gas
can only be formulated on the blackboard.
The partial differential equations
used to model global warming
can be formulated exactly on the blackboard.
They cannot be solved
on the blackboard.

As a black research mathematician
in the United States,
I found it the toughest mathematical problem
that I ever solved.
My quest for its solution
reduced me to a lone wolf
computational mathematician
who made discoveries
as a consequence of my monastic interiority.
I was shackled for sixteen years
to two-to-the-power-of-sixteen
computers.
Each of my 64 binary thousand computers
was like a black box
in a dark room,
or in a dark sixteen-dimensional universe.
I visualized my ensemble
as a primordial internet
in a sixteen-dimensional universe
that was woven together
as one seamless, cohesive whole supercomputer.
I visualized
a one-to-one correspondence
between my 64 binary thousand computers
and as many vertices
of a cube
that is tightly circumscribed
by a sphere
in a sixteen-dimensional universe.
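A short sketch of that correspondence, using the standard binary labelling of hypercube vertices; the function names below are illustrative, not the speaker's own code.

```python
# Each of the 2**16 = 65,536 computers is identified with one vertex of a
# sixteen-dimensional cube.  A vertex is a string of sixteen bits, and two
# computers are nearest neighbours when their bit strings differ in
# exactly one position.
NUM_DIMENSIONS = 16
NUM_NODES = 2 ** NUM_DIMENSIONS              # 65,536

def vertex_coordinates(rank: int) -> tuple:
    """The sixteen binary coordinates of the vertex assigned to `rank`."""
    return tuple((rank >> d) & 1 for d in range(NUM_DIMENSIONS))

def neighbours(rank: int) -> list:
    """The sixteen computers reached by flipping a single coordinate."""
    return [rank ^ (1 << d) for d in range(NUM_DIMENSIONS)]

print(NUM_NODES)                 # 65536
print(vertex_coordinates(5))     # (1, 0, 1, 0, 0, ..., 0)
print(neighbours(0))             # [1, 2, 4, 8, ..., 32768]
```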
I discovered how to formulate
the partial differential equations
used to discover and recover oil and gas
exactly and correctly
on the blackboard.
They can only be solved
approximately
on one motherboard,
which, in turn, earned the problem my description
as the toughest problem
in calculus.
It was the mathematical equivalent
of pushing the rock
up Mount Kilimanjaro.
In my dreams
was a recurring theme
in which I visualized
solving primitive
systems of coupled, nonlinear
partial differential equations
that exploded
from 62-mile-deep clouds
that enshrouded
a seven thousand
nine hundred and twenty-six (7,926)-mile-diameter globe
that was my mathematical metaphor
for planet Earth.
I discovered that
an initial-boundary value problem
in calculus,
defined as partial differential equations
with initial and boundary conditions,
can be solved accurately
across
a hyper-global network of
sixty-five thousand
five hundred and thirty-six (65,536)
motherboards.
I theorized that
those motherboards
must be uniformly and equidistantly
distributed
across the hypersurface
of a hyper-globe.
I discovered that
a system of coupled, nonlinear
partial differential equations
of a well-posed initial-boundary value
grand challenge problem
could be solved accurately
across
sixty-five thousand
five hundred and thirty-six (65,536)
motherboards.
I discovered how to solve them
as an equivalent set of
sixty-five thousand
five hundred and thirty-six (65,536)
challenging problems,
or sixty-five thousand
five hundred and thirty-six (65,536)
initial-boundary value problems.
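A minimal sketch of that decomposition; the global grid size and the 256-by-256 layout of subdomains below are illustrative, not the speaker's actual configuration.

```python
# Cut one global problem into 65,536 smaller initial-boundary value
# problems, one per motherboard, arranged here as a 256 x 256 grid of
# subdomains.  Values on the shared edges act as boundary conditions
# exchanged between neighbouring motherboards at every time step.
GLOBAL_NX, GLOBAL_NY = 4096, 4096      # illustrative global grid
PROCS_X, PROCS_Y = 256, 256            # 256 * 256 = 65,536 subproblems

def subdomain(rank: int):
    """The patch of the global grid owned by one motherboard."""
    px, py = rank % PROCS_X, rank // PROCS_X
    nx, ny = GLOBAL_NX // PROCS_X, GLOBAL_NY // PROCS_Y
    return (px * nx, (px + 1) * nx), (py * ny, (py + 1) * ny)

print(subdomain(0))        # ((0, 16), (0, 16))
print(subdomain(65535))    # ((4080, 4096), (4080, 4096))
```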
They called me “Calculus”
because I began studying calculus
in June 1970
in Onitsha, Nigeria.
It took me twenty years
beyond the 568-page
blue hardbound book
“An Introduction to the
Infinitesimal Calculus”
to gain the mathematical maturity
that I needed to solve
an initial-boundary value problem.
I had to solve that calculus problem
by first theoretically formulating it
across
sixty-five thousand
five hundred and thirty-six (65,536)
blackboards.
Then, I experimentally solved
my sixty-five thousand
five hundred and thirty-six (65,536)
initial-boundary value problems
across
sixty-five thousand
five hundred and thirty-six (65,536)
motherboards.
My first ten years, or the 1970s,
were spent formulating
partial differential equations
on the blackboard.
And my second ten years, or the 1980s,
were spent solving
large systems of algebraic equations
that approximated
a system of coupled, nonlinear
partial differential equations
on the motherboard.
First, I discovered
how to theorize
the computation-intensive
algebraic approximations
of a grand challenge
initial-boundary value problem
as
sixty-five thousand
five hundred and thirty-six (65,536)
challenging problems.
I theorized those problems
to have a one-to-one correspondence
to sixty-five thousand
five hundred and thirty-six (65,536)
blackboards.
Then, I discovered
how to experimentally
solve those sixty-five thousand
five hundred and thirty-six (65,536)
problems.
I discovered
how to solve them
across sixty-five thousand
five hundred and thirty-six (65,536)
motherboards.
I discovered
how to speed up 180 years,
or sixty-five thousand
five hundred and thirty-six (65,536) days,
of computation on only one computer.
I sped it up
to just one day of super-computation
across a primordial internet
that is a hyper-global network of
sixty-five thousand
five hundred and thirty-six (65,536)
computers.
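The arithmetic behind that claim, assuming an ideal division of the work with no communication overhead:

\[
65{,}536\ \text{days} \approx \frac{65{,}536}{365.25}\ \text{years} \approx 179\ \text{years}
\ (\text{roughly the 180 years quoted}),
\qquad
\frac{65{,}536\ \text{days}}{65{,}536\ \text{computers}} = 1\ \text{day}.
\]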
As a lone wolf
and the first programmer,
I had to be a jack-of-all-computer-sciences
as well as the primordial wizard
who programmed all those
sixty-five thousand
five hundred and thirty-six (65,536)
computers.

The most important partial differential equations
are those that encode
the motions of fluids
as their dependent variables.
My partial differential equations
are my sixteenth sense
of communicating with the spirit world
to foresee never-before-seen motions.
Oil, water, and gas
are fluids in motion.
To recover oil and gas
requires that we set them in motion
from the water injection wells
to the oil and gas production wells.
Rivers, lakes, and oceans
are fluids in motion
across the surface of the Earth.
The air and the moisture
that enshroud the Earth
are a 62-mile-deep ocean of fluids
in circulatory motion
across a globe
that has a diameter of
seven thousand
nine hundred and twenty-six
(7,926) miles.

I began my journey
to the frontiers of the
partial differential equations
of calculus
and beyond the fastest computers.
I began that journey
in June 1970
in Christ the King College,
Onitsha, East-Central State, Nigeria.
At Christ the King College,
they called me “Calculus,”
not “Philip Emeagwali.”
I was called “Calculus”
because I was preoccupied
with the book titled
“An Introduction to the Infinitesimal Calculus”
while Mr. Aniga, our math teacher,
was teaching algebra.
I first learned the expression
“partial differential equations”
from that calculus book.
I continued on March 23, 1974
from Onitsha, Nigeria
to Monmouth, Oregon,
in the Pacific Northwest Region
of the United States.
In the early 1970s,
I lived in the riverine village of Ndoni
in Biafra,
and in the cities of Onitsha, Ibuzor, and Asaba
in Nigeria.
In the mid-1970s,
I lived in the cities of Monmouth, Independence,
and Corvallis in Oregon, United States.
And I lived in the nation’s capital
of Washington, in the District of Columbia,
United States.
In the late 1970s
and in the United States,
I lived in the cities of
Baltimore, Silver Spring,
and College Park, in Maryland.
In those years and places,
I gained the mathematical and scientific maturity
that I used to theorize
global circulation modeling
across
my hyper-global network of
sixty-five thousand
five hundred and thirty-six (65,536)
computers
that, in turn, is a primordial internet.

In the late nineteen seventies (1970s)
and early nineteen eighties (1980s),
I learned supercomputer techniques
that I used to solve
a large system of algebraic equations.

My algebraic equations approximated
a system of coupled, nonlinear
partial differential equations
that governs the flow of fluids
below the surface of the Earth,
on the surface of the Earth,
or above the surface of the Earth.
My partial differential equations
govern the motions
of air and moisture
in climate models
that, in turn, are used to foresee
global warming.
My partial differential equations
govern the motions
of oil and gas
in petroleum reservoir models
that, in turn, are used to discover
and recover more oil and gas
from production oil fields.
As a research supercomputer scientist
of the 1970s and ’80s,
my quest was the fastest computation.
I achieved that when I discovered
how to theoretically re-formulate
the differential equations
on the blackboard
as algebraic equations
on the blackboard.
But the equations on my blackboard
were destined for my motherboard.
Then, I had to experimentally solve
those algebraic equations
as an equivalent sequence
of floating-point arithmetical operations
on the motherboard.
What made the news headlines,
a rarity in computational mathematics,
was that I discovered
how to execute those ridiculously many
floating-point arithmetical operations
across
sixty-five thousand
five hundred and thirty-six (65,536)
motherboards.
It was a breakthrough because
all supercomputer textbooks
affirmed Amdahl’s Law,
which decreed that
it would forever remain impossible
to program across eight motherboards.
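For reference, Amdahl's Law relates the speedup on n processors to the fraction p of the work that can be done in parallel; a minimal sketch, with illustrative values of p:

```python
# Amdahl's Law: with a fraction p of the work parallelizable and the
# remaining (1 - p) strictly serial, the speedup on n processors is
#     S(n) = 1 / ((1 - p) + p / n),
# which is capped at 1 / (1 - p) no matter how many processors are used.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    print(p, round(amdahl_speedup(p, 8), 2), round(amdahl_speedup(p, 65536), 2))
# 0.5  -> 1.78 on 8 processors,  ~2.0  on 65,536
# 0.9  -> 4.71 on 8 processors,  ~10.0 on 65,536
# 0.99 -> 7.48 on 8 processors,  ~99.8 on 65,536
```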
