Augmenting reality with geospatial information

January 16, 2019 - By William Tewelow

Geographic information systems and augmented reality are a part of our daily lives, so much so that we hardly notice them. GPS World columnist William Tewelow explores how these technologies will continue to change our lives.

Geographical information systems (GIS) and augmented reality (AR) have become a part of our daily lives, so much so that we hardly notice them. Those of us in the profession make our living by them; millions, soon billions more in the consumer world benefit from them without even realizing they are there.

The world is filled with data. Using AR, that data can be draped in front of us in a tapestry based upon our individual needs and interests. Applications multiply daily.  Many physical tools now in use will become virtual tools; workspaces, living spaces and the commutes between them (if they even exist at all) will change almost unrecognizably.

The world is poised to become an amazing and magical place.

Before we jump whole hog into the future (something AR assuredly enables us to do), a glance back at the past can fill out our understanding of these two great tools, GIS and AR, each great on its own but virtually invincible when combined. Come with me down the corridors of history . . .

When Great Swords Clash

World War II was a fight against global domination, mankind's greatest struggle for survival. Tyranny or freedom hung in the balance. The greatest minds raced to harness the powers of nature and science, plying them toward victory. The race culminated in the invention of the ultimate weapon, The Great Sword, able to lay waste to entire cities, and the Second World War ended in 1945, the year the world returned to peace. Freedom reclaimed the throne and euphoria spread, but the celebration was short-lived.

Kazakhstan. (Map: CIA archives)


In the summer of 1949, the world split in half. In the United States, families gathered around the radio for comedy and drama before putting the children to bed, but on the other side of the world, deep in the center of a faraway, unknown land, on a cool Monday morning as the sun lazily rose over a barren terrain, a second blazing sun rose into the sky. The Soviet Union unsheathed and brandished its own Great Sword, making remote Kazakhstan the center of the world in that brief moment. The sound of the bomb was heard in Washington, D.C., and phones throughout the city rang into the night. Russian spies had stolen America’s atomic secrets. Nuclear annihilation was a reality. The Cold War had begun.

The threat of nuclear weapons in Soviet hands was too great a risk. The United States had to know the extent of the threat. Satellites did not yet exist. Airplanes had limited capabilities. The only way to know what was going on inside the Iron Curtain was intelligence assets on the ground, but the Soviets controlled the ground.

Play Your Aces High

Penetrating the skies over the Soviet Union became the top priority. In 1954, Operation AQUATONE began building the U-2 spy plane, designed to fly at altitudes beyond the reach of enemy air defenses.

U-2 spy plane. (Photo: U.S. Air Force)


But the aircraft was only half the challenge. In the vacuum-tube and wet-film era, engineers also had to build a camera small enough to fit in the U-2 yet able to take pictures at the required resolution from so high an altitude. These two efforts proceeded simultaneously on opposite sides of the country: Operation AQUATONE in the Mojave Desert at what is now famously known as Area 51, and Operation HTAUTOMAT, the photogrammetry and photo-interpretation effort, in Boston and Washington, D.C. Both programs came together successfully in 1956, when the U-2 made its first reconnaissance flight over Eastern Europe.

Almost immediately, the demand for photo intelligence skyrocketed. In 1957 the Soviets launched Sputnik, the first manmade satellite to orbit the Earth. Sputnik's beeps could be understood in every language. Each beep said, "I am here above you, no matter where on Earth you are," ultimately asking the question, "What if I were a nuclear warhead?" This elevated the need to surveil Khrushchev's nuclear weapons capabilities. The Space Race had begun.

Five of a Kind Beats a Straight Flush

Satellite imagery from Discoverer XIV. (Photo: National Reconnaissance Office)


The U-2 flew unimpeded anywhere in the world for four years. That ended in May 1960, when U-2 pilot Francis Gary Powers was shot down deep inside Soviet territory. In August of that same year, the world sat transfixed watching the Soviet show trial of the captured pilot. President Eisenhower took full advantage of the diversion to launch the Discoverer XIV satellite, the first fully operational reconnaissance satellite under the CORONA program. A day later, the satellite dropped its first payload, a 20-pound capsule of film, which was retrieved over the Pacific by a C-119 Flying Boxcar. The film covered 1.6 million square miles of Soviet territory, providing more imagery than the entire U-2 program combined.

The Photo Interpreters Division (PID) was established to deal with the huge volume of imagery; it was later renamed the National Photographic Interpretation Center (NPIC). NPIC used an ALWAC III computer, advanced for its time though it ran on vacuum tubes and punch cards, to calculate size and distance in imagery. Over 12 years, the CORONA program collected 2.1 million feet of film, and processing could not keep pace with the flood of incoming imagery.

Development of the TX-2 computer in 1959 altered this picture, but two problems persisted. First, the computers' limitations prevented an analyst from working directly with imagery. Second, finding something noteworthy in an image was only half the problem; the other half was determining where on a map the feature belonged, and interior maps of the Soviet Union were vast, featureless and poorly developed.

Let Your Wild Horses Run

MIT graduate student Ivan Sutherland solved the first problem, inventing Sketchpad, a graphical user interface (GUI), on a TX-2 computer for his doctoral thesis, thereby revolutionizing computer graphics, computer-generated imagery (CGI) and computer-aided design (CAD). Sutherland soon found himself heading the Information Processing Techniques Office at the government's Advanced Research Projects Agency (ARPA), where the GUI was developed further. His innovations greatly advanced programs such as NPIC, allowing photo-interpreters to work directly with imagery displayed on a computer screen.

A visionary, Sutherland saw computer-generated synthetic worlds merging man and computer; he created what became known as the Sword of Damocles, the first augmented-reality (AR) headset. It was so heavy it had to be suspended from the ceiling on cables, dangling above the wearer's head, hence its name. The Sword of Damocles evolved into the helmet-mounted displays military pilots use today, and became the foundation for Google Glass, Oculus Rift, Microsoft's HoloLens and Meta.

Sword of Damocles headset. (Photo: MIT archives)


Several years later, Sutherland went to Harvard as an associate professor, continuing his work in computer graphics. During his tenure, a student working in Harvard's computer graphics and spatial analysis lab saw the potential of combining CGI and CAD with his own knowledge of environmental science and landscape architecture. That student was Jack Dangermond, who founded Esri in 1969.

Solitaire Takes Two

Thanks to Jack Dangermond and Ivan Sutherland, GIS and AR are a part of our daily lives, so much so that we hardly notice them. They have changed how we watch sports. Long gone are the days of John Madden with an electronic pen, scribbling out plays with great wit but terrible penmanship. Now, football broadcasts show a blue line of scrimmage and a yellow first-down line on every play. We wonder why they still bring out the chains to measure a first down when we can see it so clearly on screen, but on the field they don't have the luxury of AR.

Game highlights show a player encircled in a column of light for the commentator's in-depth coverage. Live imagery projects the commentator into the replay as if he or she were on the field in the midst of the action. Farther back, advertisements appear on the sideboards of the stadium stands, but only to television viewers; to those physically present at the game, the advertisements do not exist. You can observe this during an instant replay: take notice of the sideboards during the game, then look at them during the replay. It is a blank green board. The same is true in baseball.

AR makes it easier to follow a hockey puck, adding a blurred red tail as it zips across the ice. In golf, a light green glow surrounds the ball on long drives, enhancing our entertainment experience.

AR works by knowing where the observer is and where the observer is looking, and integrating that information with line-of-sight data. Smartphones provide that capability, ushering in the age of personal AR apps. My personal favorite is FlightAware, which tracks airplanes: aim the phone's viewfinder at an aircraft to see its altitude, speed and other information.
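That "where am I, where am I looking" logic reduces to simple spherical geometry. The sketch below is illustrative, not taken from any particular AR toolkit; the function names and the 60-degree field of view are assumptions. It checks whether a target's compass bearing from the observer falls inside the camera's horizontal field of view, which is the first test an app like this must pass before drawing a label.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the observer to the target, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def in_view(observer, heading_deg, target, fov_deg=60.0):
    """True when the target's bearing lies within the camera's horizontal field of view."""
    b = bearing_deg(observer[0], observer[1], target[0], target[1])
    diff = (b - heading_deg + 180) % 360 - 180  # signed smallest angle between bearings
    return abs(diff) <= fov_deg / 2
```

A real app would add altitude and distance to turn the same idea into a full three-dimensional line-of-sight test, but the principle is the same: compare where the object is with where the camera points.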

For identifying celestial objects, SkyMap helps find a planet, star or constellation. Real-world AR gaming is upon us, the most famous example being Pokémon Go. A more interesting game is Ingress, which uses real-world landmarks (featured in the November 2017 article, "Game-based learning improves training, engagement"). Mapbox offers a location-based AR platform to support gaming.

Figments of Imagination

Museums consider AR the next frontier. Imagine putting on a pair of AR glasses and seeing things come alive. Stand on the Moon or Mars, or fly in the cockpit of the Bell X-1, the first supersonic aircraft. Go to an art museum and step into Van Gogh's painting The Starry Night; the world around you becomes iridescent, globular and thickly swirled in bold colors. (See Alex Mayhew's exhibit ReBlink at the Art Gallery of Ontario.)

AR app for the American Museum of Natural History. (Photo: Smithsonian)


Walk through a park and statues become human, blink their eyes and speak to you. Dinosaurs, typically static monoliths, roar to life. It is no longer imagination: the Smithsonian's National Museum of Natural History has an exhibit that uses your phone to do that very thing. It might seem as if AR is only about the future, but it is also revealing the past; archaeologists are using AR to see ancient cities as they once were. Those experiences enhance our learning, but what about more practical daily uses?

The world is filled with data. Using AR, that data can be draped in front of us in a tapestry based upon our individual needs and interests. That data can be passive, like place names appearing in the field of view as icons to guide you where to go. No more looking down at a smartphone trying to figure out which way to walk: a light blue transparent dotted walking path will lie before you, leading to the icon above the door of the place you are going. Active AR, on the other hand, tries to engage you, as advertisements will. A box will seemingly glitter and glow to mesmerize a shopper into buying it; another will have tiny figures dancing on it to entice a customer. Look at a menu and the items appear real, for you to inspect before you order. The world is about to become an amazing and magical place.
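Drawing a passive icon at the right spot on screen is, at its simplest, a mapping from angular offsets to pixels. The sketch below assumes an idealized pinhole-style camera; the function name and the field-of-view values are hypothetical, not any vendor's API. Given how far the icon's direction deviates from the camera axis horizontally and vertically, it returns pixel coordinates, or None if the icon falls outside the view.

```python
def screen_position(rel_bearing_deg, rel_elev_deg, width, height,
                    hfov_deg=60.0, vfov_deg=45.0):
    """Map an icon's angular offset from the camera axis to pixel coordinates.

    rel_bearing_deg: degrees right (+) or left (-) of the camera's heading.
    rel_elev_deg:    degrees above (+) or below (-) the camera's horizon.
    Returns (x, y) in pixels, or None when the point is outside the frustum.
    """
    if abs(rel_bearing_deg) > hfov_deg / 2 or abs(rel_elev_deg) > vfov_deg / 2:
        return None
    x = (rel_bearing_deg / hfov_deg + 0.5) * width
    y = (0.5 - rel_elev_deg / vfov_deg) * height  # screen y grows downward
    return round(x), round(y)
```

Production AR frameworks do this with full 3-D camera projection matrices, but this linear approximation captures the idea: the icon slides across the screen as you turn, staying pinned to its real-world direction.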

AR view through a smartphone. (Photo: Apple)


How about workstations? They'll be a thing of the past. There will be no need for a monitor in the physical sense; one can be created as large as needed and placed anywhere, along with a virtual keyboard. You will interface directly and more naturally with the world around you.

Many of the physical tools now in use will become virtual tools: a measuring tape, a ruler, a laser level, a GPS receiver, even pen and paper to some degree. They will just be apps in your smartglasses; call it AR-ware. Mere programs, what we used to call figments of our imagination. Grab an AR-ware pen and paper and your handwriting appears perfectly normal, but it is just digital text: save it, email it or print it. Make up new tools, or download tools as we do apps on our smartphones. Imagination will be the limiting factor.

Upload CAD blueprints and schematics into an AR generator and look around the house with x-ray vision, seeing inside or through walls and floors. A plumber can see the pipes in a wall, their sizes and what they are made of. An electrician can see the wiring, frames and pass-through holes. An insurance adjuster can look at damage and take notes in AR, then pass everything along to the company, which passes it on to the contractor.

An example of an INTUS AR model. (Image: INTUS)


Take that same scenario and scale it up to the size of a city. AR allows companies to see the vast network of utilities and assets hidden in the subsurface. The water company can know exactly where its water and sewer lines are located, as well as what other utilities are nearby. Contractors can see exactly where to dig and, just as importantly, where not to dig. INTUS Inc. is a leader in the rapidly growing field of subsurface asset mapping using GIS and AR technology. INTUS's CEO, Dimitris Agouridis, calls it "intelligent infrastructure." He goes on to say the technology supports the Call Before You Dig law and helps avoid costly mistakes that can destroy property, the environment and people's lives. It saves time, money and resources, and reduces the outages and repairs that inconvenience residents. It also increases a city's resiliency after a disaster.
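Under the hood, "what other utilities are nearby" is a proximity query against a GIS asset database. The following is a minimal sketch of that idea; the sample records and field names are made up for illustration, and a real system would query a spatially indexed database rather than scan a list.

```python
import math

# Hypothetical sample records standing in for a real utility GIS database.
ASSETS = [
    {"id": "W-101", "type": "water", "lat": 38.9051, "lon": -77.0363},
    {"id": "S-204", "type": "sewer", "lat": 38.9053, "lon": -77.0360},
    {"id": "G-330", "type": "gas",   "lat": 38.9150, "lon": -77.0500},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def assets_near(lat, lon, radius_m, assets=ASSETS):
    """All utility assets within radius_m of a proposed dig site."""
    return [a for a in assets if haversine_m(lat, lon, a["lat"], a["lon"]) <= radius_m]
```

An AR headset at the dig site would run a query like this for its current position, then render the returned pipes and cables in place, the "x-ray vision" described above.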

The fascinating reality ahead of us is mere moments away, measured in months and years. We will walk into museums and experience them in new ways. We will stand in an ancient place and see it reconstructed to its former glory. We will work using smartglasses in ways we can only begin to imagine. Road crews will do precision repairs. One day I will write a column like this one, but not on a laptop; I will sit in a world part real and part virtual, tied together by a perfect symmetry of place and time. A magical future awaits us, created by merging GIS and AR.

My next column, coming in March, will go further into augmented reality and other emerging technologies that rely upon geographic information to build the next generation of intelligent infrastructure.


William Tewelow can be reached on LinkedIn.

William Tewelow, GISP

About the Author:

William Tewelow is a manager for the Federal Aviation Administration (FAA). He is a graduate of the FAA management fellowship program and a mentor with the FAA's National Mentor Program. While on special assignment to the U.S. Department of Transportation, Tewelow led the project to crowdsource the National Address Database for the White House Open Data Partnership. He is a geographic information systems professional (GISP) and a Maryland Scholar STEMnet speaker. He has a degree in geographic information technology and intelligence studies from American Military University and is currently pursuing a degree in organizational leadership. Tewelow retired from the U.S. Navy after serving 23 years as a geospatial and imagery intelligence specialist, a naval aviator, a meteorologist and a tactical oceanographer. He was among the first in the nation to earn a Geospatial Specialist Certification from the U.S. Department of Labor while working at NASA Stennis Space Center in Mississippi.
