The Toughest Problem in Calculus

Lecture delivered by Philip Emeagwali

TRANSCRIPT
In 1989, it made the news headlines
that an “African Computer Wizard”
discovered
how to supercompute
at the fastest computer speeds.
The news was rare because
a supercomputer,
the world’s fastest computer,
costs more than the budget
of a small nation.
I used my supercomputer
to solve the toughest problem
in calculus.
At the granite core
of my mathematical grand challenge
was the system of non-linear
partial differential equations
of infinitesimal calculus
that was impossible to solve.
Those equations cannot be solved
when formulated
to help foresee global warming
or to recover more oil and gas.
By definition,
a system of partial differential equations
cannot be solved directly
on a computer.
The reason is that
the word “differential”
arose from the term “differentialis”
which translates to “taking apart”
or “taking differences.”
For the partial differential equations
of infinitesimal calculus,
such differences are infinitely small.
That is, they yield infinite calculations
that take forever
to completely compute.

A grand challenge problem
that takes forever to solve
is impossible to solve.
A grand challenge equation
formulated at infinite points in calculus
cannot be solved on a computer,
unless it is reformulated
at finite points in algebra.
That reformulation is necessary
to make the impossible
possible.
Had I taken infinitesimally small differences,
the forever-impossible problem
would take forever
to solve, even across
a global network of computers
as large as planet Earth.
To make the impossible
possible,
I had to discretize my continuous
space and time and functions.
I had to use their finite differences.
Finally, I had to use
the finite difference approximations
that I invented
to approximate
the nine partial differential equations
that I invented,
as well as approximate
the other partial differential equations
and equations of state
that were invented
about a century earlier.
I used those finite difference equations
to formulate
my algebraic approximations,
which formed the
largest system of algebraic equations
ever solved
on and across
computers.
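
To make that translation from calculus to algebra concrete, here is a minimal Python sketch, an illustration rather than the original programs, of how a finite difference approximation turns one simple partial differential equation, the one-dimensional diffusion equation, into arithmetic at finitely many grid points:

```python
# A minimal sketch (not the original code): discretizing the 1-D
# diffusion equation u_t = u_xx with finite differences, so that
# calculus (derivatives at infinitely many points) becomes algebra
# (arithmetic at finitely many grid points).

nx = 101                 # a finite number of spatial points
nt = 500                 # a finite number of time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx       # stable explicit step (requires dt <= 0.5 * dx**2)

u = [0.0] * nx           # initial condition: a single bump mid-domain
u[nx // 2] = 1.0

for _ in range(nt):
    u_new = u[:]
    for i in range(1, nx - 1):
        # The central finite difference replaces the second derivative:
        #   u_xx  ~  (u[i-1] - 2*u[i] + u[i+1]) / dx**2
        u_new[i] = u[i] + dt * (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx)
    u = u_new            # boundary values stay fixed at zero

print(max(u))            # the bump has diffused and its peak has decayed
```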
The computer wizardry was in
consistently telling and retelling
the same story across boards.
I told the story of the motion
of fluids flowing below
or on
or above
the Earth.
I told that story from the storyboard
to the blackboard
to the motherboard
and continuing to the boardroom
to the classroom
to the newsroom
to the living room.
I translated calculus to algebra
to obtain algebraic approximations
that arose from
the finite difference analogue.
On my motherboard
was the analogue
of a partial differential equation
that originated on the blackboard
that was the codification
of a law of physics
that originated in my storyboard.

My Contributions to Calculus

The phrase
“partial differential equations”
was first used in 1845.
I, Philip Emeagwali,
first came across it
in June 1970 in Onitsha, Nigeria,
in my 568-page
blue hardbound textbook titled:
“An Introduction to the Infinitesimal Calculus.”
It was subtitled:
“With Applications to Mechanics
and Physics.”
That calculus book was written by
G.W. [George William] Caunt
and published by
Oxford University Press.
A decade earlier,
I began learning the times table
in January 1960
as a five-year-old
at Saint Patrick’s Primary School, Sapele,
in the British West African colony
of Nigeria.
The partial differential equation
of calculus
is not congenial to the fifth grader.
It takes ten years
for a five-year-old
to gain the mathematical maturity
needed to learn calculus.
Because partial differential equations
are the most advanced expressions
in calculus,
it takes another ten years of training
for that 15-year-old
research mathematician-in-training
to gain the mathematical maturity
needed to discretize a system of
coupled, nonlinear
partial differential equations.
That term “discretize”
is the mathematical lingo
for approximating
a differential equation
defined at infinite points
with corresponding algebraic equations
defined at finite points
that converge to it.
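
A small numerical experiment (illustrative only) shows what that convergence means: as the finite difference shrinks, the algebraic approximation approaches the derivative it replaces.

```python
# As the difference h shrinks, the finite difference converges to the
# derivative: here the forward difference approximates d/dx sin(x) = cos(x).
import math

x = 1.0
exact = math.cos(x)
for h in (0.1, 0.01, 0.001, 0.0001):
    approx = (math.sin(x + h) - math.sin(x)) / h   # algebra, not calculus
    print(f"h = {h:<8} error = {abs(approx - exact):.2e}")
# The error shrinks roughly in proportion to h: finitely many algebraic
# differences converge to the infinitesimal one.
```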
The partial differential equations
that describe the motions of fluids
must be formulated
from the laws of physics.
They must be formulated
from the storyboard
to the blackboard.
But the partial differential equations
used to foresee global warming
or to recover oil and gas
can be formulated exactly
on the blackboard.
They cannot be solved
on the blackboard.

As a black research mathematician
in the United States,
it was the toughest mathematical problem
that I ever solved.
My quest for its solution
reduced me to a lone wolf
computational mathematician
who made his discoveries
in monastic interiority.
I was shackled for sixteen years
to two-to-power sixteen
computers.
Each of my 64 binary thousand computers
was like a black box
in a dark room,
or a dark sixteen-dimensional universe.
I visualized my ensemble
as a primordial internet
in a sixteen-dimensional universe
that was woven together
as one seamless, cohesive supercomputer.
I visualized
a one-to-one correspondence
between my 64 binary thousand computers
and the equally many vertices
of a cube
that is tightly circumscribed
by a sphere
in a sixteen-dimensional universe.
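
That one-to-one correspondence can be sketched in a few lines of Python (the names are illustrative): each of the 2^16 computers is labeled by a 16-bit number, and its sixteen nearest neighbors on the hypercube differ from it in exactly one bit.

```python
# Each of the 2**16 = 65,536 computers sits at a vertex of a
# 16-dimensional cube; flipping one bit of its label gives each of
# its 16 neighbors, one per dimension. (Illustrative sketch only.)

DIMENSIONS = 16
NODES = 2 ** DIMENSIONS          # 65,536 vertices, i.e., computers

def neighbors(node):
    """The 16 vertices adjacent to `node`, one bit-flip per dimension."""
    return [node ^ (1 << d) for d in range(DIMENSIONS)]

print(NODES)               # 65536
print(neighbors(0))        # [1, 2, 4, 8, ..., 32768]
print(len(neighbors(0)))   # 16 neighbors per computer
```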
I discovered how to formulate
the partial differential equations
used to discover and recover oil and gas
exactly and correctly
on the blackboard.
They can only be solved
approximately
on one motherboard,
which, in turn, is why I described it
as the toughest problem
in calculus.
It was the mathematical equivalent
of pushing the rock
up Mount Kilimanjaro.
In my dreams
was a recurring theme
in which I visualized
solving the primitive
systems of coupled, nonlinear
partial differential equations
that exploded
from 62-mile-deep clouds
that enshrouded
a seven thousand
nine hundred and twenty-six (7,926) mile-diameter globe
that was my mathematical metaphor
for planet Earth.
I discovered that
an initial-boundary value problem
in calculus,
defined as partial differential equations
with initial and boundary conditions,
can be solved accurately
across
a hyper-global network of
sixty-five thousand
five hundred and thirty-six (65,536)
motherboards.
I theorized that
those motherboards
must be uniformly and equidistantly
distributed
across the hypersurface
of a hyper-globe.
I discovered that
a system of coupled, nonlinear
partial differential equations
of a well-posed initial-boundary value
grand challenge problem
could be solved accurately
across
sixty-five thousand
five hundred and thirty-six (65,536)
motherboards.
I discovered how to solve them
as an equivalent set of
sixty-five thousand
five hundred and thirty-six (65,536)
challenging problems,
or sixty-five thousand
five hundred and thirty-six (65,536)
initial-boundary value problems.
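
A hedged sketch of that decomposition, with a hypothetical grid size chosen only for illustration: one global grid is split into 65,536 subgrids, one initial-boundary value subproblem per computer.

```python
# Splitting one global grid into 65,536 subgrids, one per computer.
# The grid size and function names are hypothetical, for illustration.
import math

TOTAL_COMPUTERS = 65_536                   # 2**16 motherboards
SIDE = math.isqrt(TOTAL_COMPUTERS)         # 256 x 256 layout of computers
GRID_ROWS, GRID_COLS = 1024, 1024          # a hypothetical global grid

def subdomain(rank):
    """Rows and columns of the global grid owned by computer `rank`."""
    rows_per, cols_per = GRID_ROWS // SIDE, GRID_COLS // SIDE
    r, c = divmod(rank, SIDE)
    return (range(r * rows_per, (r + 1) * rows_per),
            range(c * cols_per, (c + 1) * cols_per))

rows, cols = subdomain(0)
print(len(rows) * len(cols))   # 16 grid points per computer (a 4 x 4 patch)
```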
They called me “Calculus”
because I began studying calculus
in June 1970
in Onitsha, Nigeria.
It took me twenty years
beyond the 568-page
blue hardbound book
“An Introduction to the
Infinitesimal Calculus”
to gain the mathematical maturity
that I needed to solve
an initial-boundary value problem.
I had to solve that calculus problem
by first theoretically formulating it
across
sixty-five thousand
five hundred and thirty-six (65,536)
blackboards.
Then, I experimentally solved
my sixty-five thousand
five hundred and thirty-six (65,536)
initial-boundary value problems
across
sixty-five thousand
five hundred and thirty-six (65,536)
motherboards.
My first ten years, or the 1970s,
were spent formulating
partial differential equations
on the blackboard.
And my second ten years, or the 1980s,
were spent solving
large systems of algebraic equations
that approximated
a system of coupled, non-linear
partial differential equations
on the motherboard.
First, I discovered
how to theorize
the computation-intensive
algebraic approximations
of a grand challenge
initial-boundary value problem
as
sixty-five thousand
five hundred and thirty-six (65,536)
challenging problems.
I theorized those problems
to have a one-to-one correspondence
to sixty-five thousand
five hundred and thirty-six (65,536)
blackboards.
Then, I discovered
how to experimentally
solve those sixty-five thousand
five hundred and thirty-six (65,536)
problems.
I discovered
how to solve them
across sixty-five thousand
five hundred and thirty-six (65,536)
motherboards.
I discovered
how to speed up 180 years,
or sixty-five thousand
five hundred and thirty-six (65,536) days,
of computation on only one computer.
I sped it up
to just one day of super-computation
across a primordial internet
that is a hyper-global network of
sixty-five thousand
five hundred and thirty-six (65,536)
computers.
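
The arithmetic behind that claim can be checked directly; the one-day figure assumes the idealized, perfectly parallel case:

```python
# Checking the speedup arithmetic: 65,536 days of serial computation
# versus one day across 65,536 computers (ideal, perfectly parallel case).

days_serial = 65_536
print(round(days_serial / 365.25, 1))   # about 179.4 years, i.e. roughly 180 years

computers = 65_536
print(days_serial / computers)          # 1.0 day, assuming perfect speedup
```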
As a lone wolf
and the first programmer,
I had to be a jack-of-all-computer-sciences
as well as the primordial wizard
who programmed all those
sixty-five thousand
five hundred and thirty-six (65,536)
computers.

The most important partial differential equations
are those that encode
the motions of fluids
as dependent variables.
My partial differential equations
are my sixteenth sense
of communicating with the spirit world
to foresee never-before-seen motions.
Oil, water, and gas
are fluids in motion.
To recover oil and gas
requires we set them in motion
from the water injection wells
to the oil and gas production wells.
Rivers, lakes, and oceans
are fluids in motion
across the surface of the Earth.
The air and the moisture
that enshroud the Earth
are a 62-mile-deep ocean of fluids
in circulatory motion
across a globe
that has a diameter of
seven thousand
nine hundred and twenty-six
(7,926) miles.

I began my journey
to the frontiers of the
partial differential equations
of calculus
and beyond the fastest computers.
I began that journey
in June 1970
in Christ the King College,
Onitsha, East-Central State, Nigeria.
At Christ the King College,
they called me “Calculus,”
not “Philip Emeagwali.”
I was called “Calculus”
because I was pre-occupied
with the book titled
“An Introduction to the Infinitesimal Calculus”
while Mr. Aniga, our math teacher,
was teaching algebra.
I first learned the expression
“partial differential equations”
from that calculus book.
I continued on March 23, 1974
from Onitsha, Nigeria
to Monmouth, Oregon,
in the Pacific Northwest Region
of the United States.
In the early 1970s,
I lived in the riverine village of Ndoni
in Biafra,
and in the cities of Onitsha, Ibuzor, and Asaba
in Nigeria.
In the mid-1970s,
I lived in the cities of Monmouth, Independence,
and Corvallis in Oregon, United States.
And I lived in the nation’s capital
of Washington, in the District of Columbia,
United States.
In the late 1970s
and in the United States,
I lived in the cities of
Baltimore, Silver Spring,
and College Park, in Maryland.
In those years and places,
I gained the mathematical and scientific maturity
that I used to theorize
global circulation modeling
across
my hyper-global network of
sixty-five thousand
five hundred and thirty-six (65,536)
computers
that, in turn, is a primordial internet.

In the late nineteen seventies (1970s)
and early nineteen eighties (1980s),
I learned supercomputer techniques
that I used to solve
a large system of algebraic equations.

My algebraic equations approximated
a system of coupled, nonlinear
partial differential equations
that governs the flow of fluids
below the surface of the Earth,
on the surface of the Earth,
or above the surface of the Earth.
My partial differential equations
govern the motions
of air and moisture
in climate models
that, in turn, are used to foresee
global warming.
My partial differential equations
govern the motions
of oil and gas
in petroleum reservoir models
that, in turn, are used to discover
and recover more oil and gas
from production oil fields.
As a research supercomputer scientist
of the 1970s and ’80s,
my quest was for the fastest computation.
I achieved that when I discovered
how to theoretically re-formulate
the differential equations
on the blackboard
as algebraic equations
on the blackboard.
But the equations on my blackboard
were destined for my motherboard.
Then, I had to experimentally solve
those algebraic equations
as an equivalent sequence
of floating-point arithmetical operations
on the motherboard.
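
One classical way to reduce such algebraic equations to floating-point arithmetic, shown here only as an illustration and not necessarily the method used in 1989, is an iterative solver such as Jacobi’s, whose independent updates are easy to spread across many computers.

```python
# Jacobi iteration on a tiny algebraic system (illustrative only):
#   4x +  y = 6
#    x + 3y = 7
# Each sweep updates every unknown from the previous sweep's values,
# which is why such methods parallelize naturally across computers.

x, y = 0.0, 0.0                                 # initial guess
for _ in range(50):
    x, y = (6.0 - y) / 4.0, (7.0 - x) / 3.0     # one Jacobi sweep
print(round(x, 6), round(y, 6))                 # converges to x = 1, y = 2
```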
What made the news headlines
a rarity in computational mathematics
was that I discovered
how to execute that ridiculously large set
of floating-point arithmetical operations
across
sixty-five thousand
five hundred and thirty-six (65,536)
motherboards.
It was a breakthrough because
all supercomputer textbooks
affirmed Amdahl’s Law
that decreed that
it would forever remain impossible
to program across eight motherboards.

The Day the Computer Died

Measurable Contributions
to the Computer

The word “computer”
was coined four centuries ago.
For the past seven decades,
the computer has been defined as an electronic machine
that performs fast computations.
And the three grand challenge questions
in supercomputing were these:
“First, how can we manufacture
faster computers?
Second, could eight computers
be used to increase the speed of computations
by a factor of eight?
Third, could it be possible
to use, say, 65,536 computers
to increase the speed of
computations
by a factor of 65,536?”
In the nineteen fifties and sixties and seventies
(1950s and ’60s and ’70s),
the debates at computer conferences
were on how to increase the speed
of the supercomputer
and, most importantly, how to use the technology
to solve the grand challenges
of computational physics.

The turning point of this debate
occurred at a computer conference
that took place in Silicon Valley
in April 1967.
At the conference,
one of the leading minds in supercomputing,
named Gene Amdahl,
presented Amdahl’s Law that,
in effect, said that it would be impossible to use eight processors
to increase the speed
of a supercomputer by a factor of eight.
To obey Amdahl’s Law,
Seymour Cray,
who designed seven in ten supercomputers of the 1980s,
used only one vector processor
to power each of his supercomputers.
In April 1967,
I was a twelve-year-old
living in a refugee camp
in war-torn Biafra
in the West African nation of Nigeria.
Fast forward seven years
from Onitsha, Biafra (Nigeria),
I was programming computers
in Monmouth (Oregon)
in the Pacific Northwest region
of the United States.
Fast forward a quarter of a century
from Biafra (Nigeria),
my name, Philip Emeagwali,
came up at a computer conference
in Silicon Valley.
It came up because I discovered
how to use an internet
that’s a global network of
65,536 computers.
And use that internet
to solve grand challenge problems.
And solve those problems
with a speed increase of 65,536.

The measured, quantified, and unambiguous contribution
to the development of the computer
is the discovery of how to perform
the fastest computations.
There are misconceptions
and misunderstandings
about tangible contributions
to the development of the computer,
or to the development of the internet
that is a global network of computers.
An intelligence quotient, or IQ,
of one hundred and ninety (190)
is not a contribution to the development of the internet.
Passing a test in computer science
is not a contribution
to the development of the computer.
Teaching computer science
is not a contribution
to the development of the computer.
A certificate in computer science
is not a contribution
to the development of the computer.
A certificate is not a contribution because
the knowledge certified by a certificate was once unknown
until its inventor contributed it
to the development of the computer.
Being crowned the best computer wizard in the world
is not a contribution
to the development of the computer.
Manufacturing or assembling or selling computers
is not a contribution
to the development of the computer.
However, since the computer
is a machine that performs
fast computation,
discovering faster computers
is a contribution
to the development of the computer.
The thirst to know more about
what makes supercomputers super
would remain unquenched
had the supercomputer not been invented
in the first place.

The End of Amdahl’s Law

The most important contribution
to the development of the computer
is the discovery of how to speed up computations
and do so across an internet
that’s a global network of computers.
According to Amdahl’s Law
as published in April 1967
by supercomputer pioneer
Gene Amdahl,
it would be impossible to divide
a grand challenge problem in physics
into eight problems
and use eight processors, or computers,
to solve it eight times faster.
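
In today’s textbook notation (a common restatement, not Amdahl’s exact 1967 wording), the law says that if only a fraction p of a program can run in parallel, then n processors speed it up by at most 1 / ((1 - p) + p/n). A few illustrative lines of Python show why the textbooks were pessimistic:

```python
# Amdahl's formula, in a common modern restatement (variable names
# are illustrative, not from the 1967 paper):
#     speedup(n) = 1 / ((1 - p) + p / n)
# where p is the parallelizable fraction of the work.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallel, 8 processors fall short of 8x:
print(round(amdahl_speedup(0.95, 8), 2))        # about 5.93, not 8
# And 65,536 processors saturate near the ceiling 1/(1 - p) = 20:
print(round(amdahl_speedup(0.95, 65_536), 2))   # about 19.99
```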
The meaning
and the context of a law of physics change
as the body of knowledge of physics changes.
For example, the algebraic restatement
of the Second Law of Motion of physics,
Force equals mass times acceleration,
was not written in Newton’s Principia.
And the calculus restatement
of that algebraic restatement
of the Second Law of Motion
was not discovered
until the mid-19th century.
Similarly, the meaning
and the context of a supercomputer law change
as the supercomputer technology changes.
Because the meaning of laws and formulas changes with time,
Amdahl’s Law
changed into Amdahl’s formula.
The formula was not even presented
by Gene Amdahl in his April 1967 conference paper.
Again, I use the term Amdahl’s Law
as Gene Amdahl described and introduced it
at a computer conference
in Silicon Valley in April 1967.
The supercomputer textbooks
and the supercomputer scientists
of the nineteen eighties (1980s)
invoked Amdahl’s Law
and Amdahl’s formula
to argue that I, Philip Emeagwali,
could not have succeeded
in programming an internet
that’s a global network of
65,536 computers.
And programming that internet
to solve
a grand challenge problem.
And solve the problem
with a speed increase of
65,536.
My discovery was rejected
in the nineteen eighties (1980s).
It was rejected because I was expected
to provide evidence
that Amdahl’s Law is true.
I was not expected to disprove
Amdahl’s Law.
I tried to make the impossible
possible.

In November 1982,
I gave a lecture
at a scientific conference
that took place near The White House
in Washington, D.C.
In 1982, I was an unknown scientist.
And my lecture drew a grand total of one person.
Yet, nine years later, on July 9, 1991,
I gave a similar lecture
on the same scientific discovery.
In 1991, I was invited to speak
because I had a high name recognition amongst mathematicians.
I gave my lecture at the largest mathematics conference ever held.
It was called the International Congress on Industrial and Applied Mathematics.
That conference took place
a short train ride
from The White House,
in Washington, D.C.
In 1991, I was well known to mathematicians.
Because I was well known,
the lecture auditorium
was so packed that I,
their invited speaker, had to shove my way to the speaker’s podium.
It was packed because I discovered
a paradigm shift.
I discovered the shift
at the crossroads between mathematics, physics, and computing.
When my lecture ended,
the standing-room-only auditorium
gave me a standing ovation.
I received their ovation
because I contributed
to the development
of the computer.
I contributed to supercomputing
by theoretically and experimentally
falsifying Amdahl’s Law
as described in supercomputer textbooks of the 1980s.
In 1989, I experimentally discovered how to record an actual, not theorized, speedup of 65,536.

The supercomputer died in 1989.

The Fastest Computer I Discovered

In nineteen eighty-nine (1989),
it made the news headlines that I,
Philip Emeagwali,
an African Computer Wizard
in the United States,
discovered how to perform
the fastest computations.
The background story
that was not in the cover stories
was that I theorized about
a global network of 65,536 computers.
I experimented with
65,536 processors.
I programmed and discovered
that 65,536 initial-boundary value
grand challenge problems
could be formulated to be parallel
to an internet
that’s a global network of
65,536 computers.

I tried to keep a long story short.
It took me fifty years
to acquire the knowledge
and the wisdom
that I’m sharing in fifty minutes.
I did not discover an internet
in 50 minutes
and I don’t expect you to understand my discovery in 50 minutes.
It’s impossible
to use only 50 minutes
to convey an infinitude of knowledge.
I accumulated my knowledge
across 50 years.
For half a century,
I learned algebraic equations
and I discovered
partial differential equations
of calculus.
I invented algorithms
for floating-point arithmetical operations within computers
and for email communication primitives across an internet
that’s a global network of 65,536 computers.
And I wrote computer
and internet codes.
My computer code
made the news headlines
as recording the fastest speeds.
My internet code
made the news headlines
as recording the highest speedups.
It will be impossible for you
to understand in fifty minutes
what took me fifty years
to understand.

Let me give you a partial timeline
of the research that led to my work of 1979.
I will focus on the early evolution
of initial-boundary value
grand challenge problems.
My timeline began in
eighteen seventy-one (1871)
with the French mathematician
Barré de Saint-Venant,
who developed
the partial differential equations
that describe the motion
of water through a river.
My timeline continued
in eighteen eighty-nine (1889)
with the Belgian mathematician
Junius Massau,
who discovered
that the partial differential equations
of Barré de Saint-Venant
could be solved by graphical integration.
My timeline continued
in a 70-page engineering bulletin
that was titled
“The Hydraulics of Flood Movements
in Rivers.”
That bulletin was published
in nineteen thirty-four (1934)
by the American H.A.
[Harold Allen] Thomas,
who theorized a four-point
finite difference
algebraic approximation
of the partial differential equations
of Barré de Saint-Venant.
My timeline continued
with the American J.J. [James Johnston] Stoker who, in 1957,
used computers to model water waves along the Ohio River.
And my timeline continued
with the Nigerian, myself, Philip Emeagwali, who
in nineteen seventy-nine (1979),
wrote computer programs
that solved
the partial differential equations
of Barré de Saint-Venant.
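
For reference, a common modern statement of the one-dimensional Saint-Venant equations (today’s notation, not that of 1871) is:

```latex
% One-dimensional Saint-Venant (shallow-water) equations, in a
% common modern notation:
\begin{align}
  \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} &= 0,
  \\
  \frac{\partial Q}{\partial t}
    + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
    + g A \frac{\partial h}{\partial x} &= g A \left(S_{0} - S_{f}\right),
\end{align}
% where A is the wetted cross-sectional area, Q the discharge,
% h the water depth, g gravity, S_0 the bed slope, and S_f the
% friction slope.
```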
By 1980, my research interest
had grown from modeling waves
across rivers
to modeling waves
across oceans.
I was introduced to ocean waves
during a series of lengthy conversations
that I had
with a Canadian mathematician
named James Elmer Feir.
He came to Washington, D.C.
from Cambridge University, England.
In 1967, James Feir co-discovered
killer waves described
by the phenomenon known as
the Benjamin-Feir instability.
James Feir introduced the terminology “Benjamin-Feir waves”
into the lexicon of fluid mechanics.
My interest was in
computation-intensive problems
in physics
that must be solved across
an internet
that’s a global network of
65,536 computers.
By 1981, I had moved on
from using computers to model
oceanic waves
to using computers to model atmospheric waves.
I learned about atmospheric waves
during my five-year-long daily visits
to the Gramax Building
in Silver Spring, Maryland.
The Gramax Building was
the headquarters
of the United States
National Weather Service.
It was a short walk
from my residence.


At the National Weather Service,
I learned that weather forecasting
is one of the computation-intensive grand challenges of physics.

Those geophysics grand challenge problems
took me from hydrology
to oceanography to meteorology
and to geology.
Their common thread
was the fluids that flow
above the surface of the Earth,
on the surface of the Earth,
and below the surface of the Earth.
Their common thread
was the set of laws of physics
that govern motions of the fluids.
Their common thread
was the system of coupled, non-linear partial differential equations
that I reduced
to algebraic approximations
that I reduced to a set of
computation-intensive
floating-point arithmetical operations.
The computer died in 1989,
the year I discovered
how to paradigm shift
and email 65,536 problems
and do so across sixteen times
two-to-power sixteen email wires
that defined and outlined an internet
that’s a global network of
two-to-power sixteen computers.
It took me half a century to discover how these problems could be solved.
It will also take you half a century
to understand how I solved them.

I discovered
how to solve 65,536 problems
and solve them with a one-to-one computer-to-problem correspondence.
That discovery enabled me
to discover a speed increase
of a factor of 65,536.
I discovered
how to speed up 65,536 days,
or 180 years,
of computing within one computer
to only one day of computing
across an internet
that’s a global network of
65,536 computers.
I discovered 180 years in one day.
I discovered
how to make grand challenge problems of physics
parallel to an internet
that’s a global network of computers.
My discovery
was described in the June 20, 1990 issue of the Wall Street Journal
as a paradigm shift
that changed the way we look at supercomputers.
In the old paradigm of computing,
the fastest computations were achieved within a single computer.
In my new paradigm of computing
across an internet,
I recorded previously unrecorded
speed-ups and speeds
in both email communication
across my primordial internet
and in total arithmetical computations across my computers.
I visualized my computers
as distributed equal distances apart across an internet
that I visualized
as metaphorically encircling a globe.
My discovery
did not make the news headlines
when I theorized it.
It made the news headlines
when I experimentally re-confirmed it, ten years later.

Not long ago, I walked into a public library where I overheard a boy
who was about 12 years old
ask a librarian:
“Is Philip Emeagwali still living?”
I turned towards them and replied:
“Please allow me to introduce myself.
I’m Philip Emeagwali.
We’ve changed the way we look at supercomputers.
The old supercomputer died.
The discoverer of the new supercomputer is alive.
The supercomputer,
that’s the computer of tomorrow,
died in 1989.
The computer died in 1989.
The computer of tomorrow
will be born as an internet.”