By the time you read this, NASA’s New Horizons space probe will likely have just had its long-awaited ‘close encounter’ with Pluto, one of the most distant outposts of our Solar System. Its first pictures will have been splashed across the news, images that it has travelled an epic 4.8 billion kilometres in order to capture.
This was the first mission to a new planet – or, more accurately, a dwarf planet, as Pluto is now classified – since Voyager 2 visited Neptune in 1989. The project is one of superlatives. It’s the furthest a spacecraft has ever flown to reach its primary target. It’s also the first mission to a double system, as Pluto and its moon Charon are jointly known. Because the moon is over half the diameter of the dwarf planet itself, the two bodies actually orbit each other around a point that lies outside Pluto.
Capturing photos billions of miles from Earth, of an astronomical object that’s little more than a point of light in the sky, is an awe-inspiring achievement. In all probability, though, this visual feast will be just the tip of the iceberg. As scientists start to analyse the volumes of data being returned by New Horizons, there’s every expectation that we’ll learn far more about this distant world than previous generations of astronomers could have dreamed of.
^ Work on New Horizons started in November 2001. Scientists from NASA, Johns Hopkins University and Southwest Research Institute are involved in the project
Our emphasis here isn’t primarily on astronomy, but on the technology that made all this possible. Here we explore the computer systems on board the spacecraft, designed to survive in one of the most hostile environments imaginable and soldier on if hardware faults threatened the success of the $700 million mission. We’ll also examine the imaging instruments – sophisticated digital cameras, if you like – that will furnish us with such amazing images, and we delve into the challenges of returning data from such vast distances.
Computer System
To say that, like all modern devices, New Horizons relies on computer systems to control all its functions may be stating the obvious, but this spacecraft isn’t exactly a modern device. New Horizons has taken nine years to reach Pluto, and it was on the drawing board for five years before it cleared the launch pad at Cape Canaveral in 2006. By computing standards that makes this incredible scientific achievement something of an antique.
At the time New Horizons was being designed, PCs were powered by 800MHz Pentium IIIs, but space scientists tend to be more conservative, choosing to go with something tried and tested instead of risking failure by opting for the latest high-performance chips. In addition, because radiation-hardened chips are required for operation onboard spacecraft, only processors that have been around long enough for a suitably engineered variant to be developed can be used.
^ The Mongoose-V is a radiation-hardened variant of the MIPS R3000 processor, most famously found in the original Sony PlayStation
In the case of New Horizons, the processor of choice was Synova’s Mongoose-V chip, a radiation-hardened version of the MIPS R3000 processor – a design first released in 1988 and clocked at just 12MHz. Despite this apparently lacklustre specification, though, Mongoose-V processors aren’t exactly cheap, costing from $23,000 each.
A MIPS-based processor may seem an odd choice in a world dominated by Intel and AMD chips on the desktop and ARM cores in smartphones and tablets, but MIPS was one of several processor architectures launched in the 1980s and early 1990s to power high-performance UNIX workstations and servers. Along with Sun’s SPARC, DEC’s Alpha, IBM’s POWER and PowerPC, and HP’s PA-RISC, these chips adhered to the RISC (Reduced Instruction Set Computer) design philosophy, based on the premise that a simple, streamlined design can achieve lightning speed, despite placing greater demands on the software.
At the time, these chips were much faster than processors based on the Intel x86 architecture, which was a CISC (Complex Instruction Set Computer) design. As the differences diminished over time, and Intel maintained a significant price advantage, some of these architectures fell by the wayside, while others were repositioned. SPARC is still used in Fujitsu workstations and PowerPC can be found in some of IBM’s server and supercomputer lines, but it’s in embedded systems that you’re most likely to find MIPS.
^ When it launched in 1994, the original Sony PlayStation was powered by a MIPS R3000 processor
Embedded systems are no less fundamental to our computing infrastructure than the big-name Intel Core processors that take the limelight. MIPS designs might no longer boast the highest headline performance figures, but they consume far less power than more mainstream chips, and this is an increasingly important consideration.
Despite this efficiency, modern designs offer a choice of 32- or 64-bit architecture, speeds of up to 2GHz, and as many as six cores per chip. MIPS processors are found in networking equipment, storage devices and printers, as well as smart TVs, set-top boxes, home automation kit, in-car systems, industrial control and automation equipment and wearables.
The first Sony PlayStation, launched in 1994, used the MIPS R3000 on which the New Horizons’ Mongoose-V chips are based. So the chip that took us to a myriad of alien worlds in games is now visiting real alien worlds out in space.
Doubling up
For resilience, New Horizons carries two Integrated Electronics Modules (IEMs), each of which has two of these Mongoose-V processors – one for data handling and the other for guidance and control. Each IEM is equipped with a 64Gbit solid-state recorder, which can be thought of as a radiation-tolerant memory card or solid-state disk.
The risk of radiation damage is just one of several threats which, if not properly managed, could have derailed New Horizons, wasting $700 million and shattering the dreams and expectations of its creators. We spoke to Chris Hersman, New Horizons mission systems engineer at Johns Hopkins University Applied Physics Laboratory, to learn how the computer systems were designed to survive a decade-long journey through space. He explained how a unique design philosophy addressed several of the most demanding challenges.
^ New Horizons was launched from Cape Canaveral in January 2006 aboard a Lockheed Martin Atlas V-551 rocket with a Boeing STAR-38 solid-propellant booster
“Keeping it low power is essential since the whole spacecraft has only 200W to do everything,” he said. “This requires custom-built circuits. For example, the solid-state data recorder is built in a way that only those chips being written to or read from are powered up.” There are other advantages of this approach, too, as Hersman went on to explain. “Other benefits that this provides is that the chips last longer and they are less vulnerable to radiation.”
But just what is the risk posed by radiation? Does it just cause circuits to malfunction or can it actually destroy them? “Both,” said Hersman, “unless properly managed.” Using memory as an example, he went on to describe how, even though New Horizons doesn’t suffer from as much radiation as spacecraft in Earth’s orbit or in the vicinity of Jupiter, the memory experiences an average of 10 ‘upsets’ per day. Because of this, the memory is designed both to detect and correct errors.
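The detect-and-correct principle Hersman describes can be illustrated with the classic Hamming(7,4) code – a minimal Python sketch of how error-correcting memory works in general, not New Horizons’ actual flight logic:

```python
# Illustrative sketch only: Hamming(7,4), the textbook scheme behind
# single-error-correcting memory. Four data bits gain three parity bits;
# any single flipped bit in the seven can be located and flipped back.

def hamming_encode(nibble):
    """Encode 4 data bits [d1,d2,d3,d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over codeword positions 5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming_decode(word):
    """Correct up to one flipped bit and return the 4 data bits."""
    w = list(word)
    # Recompute the parity checks; together they spell out the error position
    c1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    c2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    c3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = c1 + 2 * c2 + 4 * c3       # 0 means no error detected
    if syndrome:
        w[syndrome - 1] ^= 1              # flip the corrupted bit back
    return [w[2], w[4], w[5], w[6]]

data = [1, 0, 1, 1]
codeword = hamming_encode(data)
corrupted = list(codeword)
corrupted[4] ^= 1                         # simulate a radiation 'upset'
assert hamming_decode(corrupted) == data  # the upset is found and fixed
```

Real spacecraft memory uses stronger variants of the same idea (and periodic ‘scrubbing’ to stop errors accumulating), but the principle – redundant bits that pinpoint a flipped cell – is exactly this.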
Perhaps the most ambitious means by which New Horizons is designed to withstand this most hazardous of environments, though, is by duplication. Instead of a single computer system there are two, and this philosophy of redundant circuitry applies to many of the other key electronic systems, too. Hersman described how, in most cases, only one of the circuits can be powered up at once, and the command to switch between one circuit and the other either comes from ground control or is generated onboard as a result of abnormal conditions such as a high temperature in one of the circuits.
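The switch-on-abnormal-conditions behaviour Hersman outlines can be sketched as a toy supervisor. This is purely illustrative – the temperature threshold is made up, and the real fault-protection rules are far more involved:

```python
# A toy sketch of the redundancy idea (not flight software): one of two
# identical units is powered at a time, and a monitor swaps to the spare
# when a reading such as temperature goes out of limits.

class RedundantPair:
    TEMP_LIMIT_C = 50.0            # hypothetical over-temperature threshold

    def __init__(self):
        self.active = 0            # index of the powered unit (0 = primary)
        self.switch_count = 0

    def check(self, temperature_c):
        """Autonomous fault response: fail over on an abnormal reading."""
        if temperature_c > self.TEMP_LIMIT_C:
            self.failover()

    def failover(self):
        """Power down the active unit and bring up the spare."""
        self.active = 1 - self.active
        self.switch_count += 1

pair = RedundantPair()
pair.check(23.0)       # normal reading: primary stays active
pair.check(61.5)       # over-temperature: the backup takes over
assert pair.active == 1
```

As Hersman notes, the same swap can also be commanded from the ground, which is why deep-space probes keep both paths: autonomous rules for emergencies, human judgement for everything else.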
Imaging Systems
Like most of NASA’s planetary probes, New Horizons carries a range of scientific instruments, seven of them to be precise, several with strange-sounding names such as PEPSSI, which is described by the equally mysterious-sounding phrase ‘energetic particle spectrometer’. The two instruments that have probably done most to create public interest in the mission are LORRI and Ralph. The former is described as a long focal length panchromatic charge-coupled device camera, while the latter is a multicolour imager/infrared imaging spectrometer. In plain English, they’re the digital cameras that were designed to bring Pluto and Charon to life. However, these instruments are markedly different from everyday digital cameras.
^ LORRI, the Long Range Reconnaissance Imager, is fitted into New Horizons
At first sight, New Horizons’ imaging instruments don’t look particularly special. With a resolution of 1,024x1,024 pixels, LORRI can be thought of as a one-megapixel camera, which compares unfavourably with the 16 megapixels provided by today’s cheap compact cameras. The high resolutions of modern cameras may be excessive for most uses (your monitor is only around two megapixels, after all), but even then we’re not comparing like with like.
This view is reinforced further when we look at Ralph. With its 5,024x32-pixel CCDs, one for each primary colour plus infrared, it has a resolution of just 0.16 megapixels. Even stranger, the image shape seems ridiculously long and thin, with an aspect ratio of 157:1. The fact is that, generally speaking, an exposure is not the same thing as a photograph. Instead, these imaging instruments often capture multiple exposures which are then stitched together as a mosaic to create a single high-resolution image.
To understand more about this very different approach to digital photography, we put some questions to Dr Paul Jerram, head of engineering sensors at e2v, the British company that supplied the CCDs for New Horizons’ imaging instruments. He started by explaining the rationale behind Ralph’s long, thin sensor.
“Ralph operates in a scanning mode,” he said, “in a similar way to the sensor in a document scanner. These scanners generate the image on a line-by-line basis as the document passes under the sensor. However, in the case of Ralph, the sensor has 32 lines which are added together as the spacecraft scans the ground, thereby increasing the sensitivity by a factor of 32. The advantage of this is that very large multicolour images can be built up from relatively small sensors.”
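The 32-line summing Jerram describes – known as time delay integration (TDI) – can be simulated in a few lines of Python. This is a simplified sketch with made-up signal levels: the summed signal grows 32-fold, matching Jerram’s ‘factor of 32’, while the random noise grows only as the square root of 32:

```python
# Simplified TDI illustration: the same ground line is exposed once per
# sensor row as the spacecraft scans, and the 32 readings are summed.
import random

random.seed(0)                      # reproducible illustration

LINES = 32                          # Ralph's 32 TDI stages
signal = 5.0                        # hypothetical photoelectrons per exposure

# One exposure: the signal plus one unit of random noise
single = signal + random.gauss(0, 1)

# TDI: sum the 32 exposures of the same ground line
summed = sum(signal + random.gauss(0, 1) for _ in range(LINES))

print(f"single exposure: ~{signal:.0f} signal, 1.0 noise")
print(f"32-line TDI sum: ~{LINES * signal:.0f} signal, "
      f"~{LINES ** 0.5:.1f} noise")
```

Because noise adds in quadrature, the signal-to-noise ratio improves by √32 ≈ 5.7 over a single line exposure – which is what makes a small, long, thin sensor viable in Pluto’s dim light.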
However, as Jerram went on to explain, the emphasis on small sensors isn’t only a matter of economics. “It is more that a scanning sensor is the most practical way to build up really large colour images with a high sensitivity array with large pixels. To produce a full-colour image with a staring array would need four sensors, each of which would be 5k x 5k (25 megapixels) or a single array of 100 megapixels. This would also greatly increase the size of the optics to an impractical degree.”
^ Preparing New Horizons for launch requires stringent 'clean room' conditions
At first sight, though, this still doesn’t tell the whole story: even though 100 megapixels is admittedly a large number, it’s only the equivalent of a handful of cheap point-and-shoot cameras. But these are no ordinary CCDs, as Jerram pointed out.
“Space is a harsh environment. The main challenges are surviving the launch process and the adverse environmental conditions that are encountered on a journey, such as radiation. This means that a large part of the project involves environmental testing to show that the sensor performance will not be damaged by shock, vibration, temperature extremes and the x-rays and proton radiation that are encountered outside the Earth’s atmosphere. So, while there is only a single sensor on board the spacecraft, many tens of sensors are produced and destructively tested to ensure that it will work as required and when required,” he said.
Sense and Sensitivity
And there’s more. “Pluto is over 30 times as far from the sun as the earth”, Jerram said, “so the sun will appear 30 times smaller. This means there is 1,000 times less light. Also, New Horizons will go past Pluto at 30,000 miles per hour, so very high-sensitivity imagers are required.”
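Jerram’s arithmetic follows from the inverse-square law, as a quick sketch shows (we assume a round figure of 33 astronomical units for Pluto’s distance; it actually varies considerably along Pluto’s eccentric orbit):

```python
# Sunlight falls off with the square of distance from the Sun.
distance_au = 33                   # Pluto's approximate distance, in
                                   # multiples of the Earth-Sun distance
light_ratio = distance_au ** 2

print(f"Sunlight at Pluto is about {light_ratio}x weaker than at Earth")
# 33 squared is 1,089 - in line with the '1,000 times less light' figure
```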
This is evident if we look at the size of the sensor. Commonly, the CCD in a cheap consumer camera measures 6.17x4.55mm, which works out at 1.75 square microns per pixel for a 16-megapixel camera. By way of contrast, the pixels in Ralph’s CCD are almost a hundred times larger.
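The sums behind that comparison are straightforward (a sketch; the 13-micron pixel pitch for Ralph’s CCD is our assumption, not a figure quoted in the article):

```python
# Pixel area of a typical 16-megapixel compact camera sensor
sensor_w_mm, sensor_h_mm = 6.17, 4.55
pixels = 16e6
compact_area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000) / pixels
print(f"compact camera pixel: {compact_area_um2:.2f} sq microns")  # ~1.75

# Ralph's CCD, assuming a 13-micron pixel pitch
ralph_area_um2 = 13.0 ** 2
print(f"Ralph pixel: {ralph_area_um2:.0f} sq microns, "
      f"about {ralph_area_um2 / compact_area_um2:.0f}x larger")    # ~96x
```

Bigger pixels collect more photons, which is exactly what a sensor starved of light needs.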
Moving on from the CCD, the other element of a camera that has a major impact on image quality is the lens, and here the figures for New Horizons’ instruments, and LORRI in particular, make interesting reading. LORRI has a field of view of 0.29 degrees, compared to around 40 degrees horizontally for a camera lens zoomed in to the point at which the scene appears normal. Dividing one by the other gives the amount by which the lens magnifies the image – a massive 138 times. This is far more than any regular camera zoom lens offers, but that’s just a start.
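The division in question, as a quick check:

```python
normal_fov_deg = 40.0    # rough horizontal field of view of a 'normal' lens
lorri_fov_deg = 0.29     # LORRI's quoted field of view

magnification = normal_fov_deg / lorri_fov_deg
print(f"LORRI magnifies roughly {magnification:.0f}x")   # ~138x
```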
^ After spending most of its voyage in hibernation, New Horizons is woken up for its Pluto encounter in December 2014. Radio signals take four and a half hours to reach Earth
A camera with a large CCD needs a longer focal length lens to achieve the same magnification as one with a smaller CCD. Because of the need to work in low-light conditions, LORRI’s CCD is much larger than those in most digital cameras, again pushing up the focal length. What’s more, as the focal length increases, so must the lens’s diameter if its light-gathering capacity is not to be jeopardised.
The bottom line is that LORRI has a huge and vastly expensive 2,630mm focal length lens with an aperture of 208mm. While for simple lenses we might think of the aperture as one and the same as the diameter, LORRI’s lens requires the largest optical element to be 758mm in diameter and the whole assembly weighs 5.64kg. When we add in the rest of LORRI, we end up with something that weighs about as much as 60 compact digital cameras and we can’t begin to imagine how many times more expensive.
Communication System
When design work on New Horizons started in 2001, the fastest Wi-Fi available, 802.11a, topped out at a theoretical 54Mbit/s. However, because space probes and their ground-based Earth stations don’t have to adhere to standards-based communication protocols, it might seem reasonable to believe that much higher rates of data throughput would have been possible.
The International Space Station (ISS) has several communication links, each offering 300Mbit/s. There’s an important difference between the ISS and New Horizons, though, that will be all too familiar to users of home Wi-Fi networks. As you get further from your access point, the speed drops dramatically with the falling signal strength, which is why your network doesn’t reach down to the bottom of your garden. Even today’s top-end 802.11ac routers see their throughput halved at a range of just 10 metres.
New Horizons, on the other hand, is 480 billion times further away than the bottom of your garden. It would therefore either need to be fitted with some very powerful kit or its data throughput would be seriously slow. It’s not surprising to learn that both are true.
^ New Horizons sees Pluto for the first time in January 2015. Still 200 million km away, images aren't yet as good as those from the Hubble Space Telescope
X marks the spot
In common with many deep space missions, New Horizons uses the X band for radio communication as it offers a good compromise between atmospheric absorption and low noise. The frequency is about 8GHz, which isn’t too different from the 2.4GHz and 5GHz used in Wi-Fi equipment. There the similarity ends, though. The maximum permissible transmit power of Wi-Fi equipment in the UK is a tenth of a watt on 2.4GHz or one watt on 5GHz. In fact it’s often a bit less, because the regulations refer to the effective radiated power, which also takes the performance of the antenna into consideration. New Horizons’ transmitter, on the other hand, outputs 12 watts.
While a 12- or 120-fold improvement is not inconsiderable, it pales into insignificance when we compare the antennas. Typically, Wi-Fi equipment has small antennas that are more or less omni-directional and so have a gain – that is, the amount by which the antenna magnifies the signal – of perhaps two decibels (dB). New Horizons’ antenna is a 2.1m dish with a gain of 40dB, and the antennas in NASA’s deep space network that are used to communicate with distant space probes are massive 70m dishes with a 74dB gain. Since both the transmit and the receive antenna have an impact on the strength of the received signal, this setup provides approximately 108dB more gain than the link between a Wi-Fi access point and a laptop.
^ Even with NASA's huge 70m antennas, the signal received from New Horizons was tiny
When you consider that the decibel is a logarithmic method of measurement, and each additional 3dB corresponds to a doubling in the power, you get some idea of how huge this difference really is. Taking the increased transmit power and the higher gain antennas into consideration, the link from Pluto to Earth uses the equivalent of a thousand billion times more power than a typical Wi-Fi link.
The reason such a huge power level is needed is because power decreases with the square of the distance. So, double the distance and the power drops to a quarter, at 10 times the distance it’s a hundredth, and so on. Going back to that Wi-Fi link at a range of 10m, and comparing it with the 4.8 billion kilometres to Pluto, we find that the signal strength would be 230,000 billion billion times less. Now that thousand billion times increase in effective power doesn’t look too excessive after all.
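The figures above can be put together as a rough link-budget sketch. Note that the exact total depends on the Wi-Fi antenna gain you assume, which is why slightly different dB figures appear in print:

```python
# Back-of-envelope link budget using the figures quoted in the article.
from math import log10

def db(ratio):
    """Convert a power ratio to decibels."""
    return 10 * log10(ratio)

# Transmit power: 12W versus a 1W Wi-Fi access point
power_db = db(12 / 1.0)                       # ~10.8dB

# Antenna gains: 40dB dish + 74dB ground station, versus roughly 2dB
# at each end of a Wi-Fi link (different assumed Wi-Fi antenna gains
# give the ~108dB figure quoted earlier)
antenna_db = (40 + 74) - (2 + 2)              # 110dB

total_db = power_db + antenna_db
print(f"total advantage: ~{total_db:.0f}dB, "
      f"a factor of {10 ** (total_db / 10):.1e}")          # ~1.2e+12

# Inverse-square path loss: 4.8 billion km versus a 10m Wi-Fi link
loss_ratio = (4.8e9 * 1000 / 10) ** 2
print(f"signal weakened by a factor of {loss_ratio:.1e}")  # ~2.3e+23
```

The ~10¹² gain in effective power against a ~2.3x10²³ increase in path loss is exactly why, despite all that hardware, the data rate is still measured in bits per second.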
Bit part
The received power is so low that, during the Pluto encounter, New Horizons was designed to return data at a minimum speed of just 600 bits per second. Note that we’re talking of bits per second, not kilobits or megabits, so this is at least a million times slower than an 802.11ac Wi-Fi link. Returning a single image of Pluto to Earth takes several hours, and NASA is having to prioritise the remaining data for transmission, a process that will take months.
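That ‘several hours’ figure is easy to verify. This sketch assumes raw 12-bit LORRI pixels and no compression, whereas in practice the probe can also return compressed versions of its images:

```python
pixels = 1024 * 1024      # one LORRI frame
bits_per_pixel = 12       # assumed raw pixel depth
rate_bps = 600            # worst-case downlink rate

seconds = pixels * bits_per_pixel / rate_bps
print(f"one uncompressed frame: ~{seconds / 3600:.1f} hours")  # ~5.8 hours
```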
As well as the data rate being minuscule, the huge distance to Pluto affects another important characteristic of a communication channel: its latency. While the data transmission rate is often loosely referred to as ‘speed’, latency is governed by genuine speed – in this case, the time it takes a radio signal to travel between Pluto and Earth. Even though radio waves travel at the speed of light – around 300,000 kilometres per second – signals from New Horizons are currently taking four-and-a-half hours to reach Earth.
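The latency follows directly from the distance, as a quick check with the speed of light shows:

```python
distance_km = 4.8e9               # approximate Earth-Pluto distance
light_speed_km_s = 299_792        # speed of light

one_way_hours = distance_km / light_speed_km_s / 3600
print(f"one-way signal time: ~{one_way_hours:.1f} hours")   # ~4.4 hours
```

That comes out at just under the quoted four-and-a-half hours; the exact figure shifts as the Earth-Pluto distance changes.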
Chris DeBoy, the New Horizons communications systems engineering lead at Johns Hopkins University, told us how such signals were still able to return useful information to mission controllers on Earth. For a start, more advanced coding schemes were used than those in, for example, the familiar Wi-Fi networks.
^ After almost 14 years in planning and in transit, New Horizons has reached its target and scientific exploration starts in earnest
“New Horizons and NASA’s Deep Space Network have taken advantage of developments in advanced coding techniques to maximize the downlink data rate,” he said. “New Horizons uses a rate 1/6 Turbo code for forward-error correction. Turbo codes can perfectly reconstruct the transmitted message at the receiver when the signal is seemingly buried in noise, and they approach the best theoretical code performance achievable.” That ‘best’ is defined by a complex piece of maths known as the Shannon Limit, and it’s not going to get any better any time soon.
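Turbo decoding is far beyond a short listing, but the underlying trade – spend extra bits now, recover the message from noise later – can be shown with the simplest forward-error-correction scheme of all: a repetition code with majority voting. To be clear, this is not what New Horizons uses; its rate-1/6 Turbo code is vastly more powerful than this sketch:

```python
# Toy forward-error correction: a rate 1/n repetition code. Each bit is
# sent n times and the receiver takes a majority vote per group, so any
# single flipped symbol in a group of three is voted away.

def fec_encode(bits, n=3):
    """Send each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def fec_decode(symbols, n=3):
    """Majority vote over each group of n received symbols."""
    out = []
    for i in range(0, len(symbols), n):
        group = symbols[i:i + n]
        out.append(1 if sum(group) > n // 2 else 0)
    return out

message = [1, 0, 1, 1, 0]
sent = fec_encode(message)
sent[1] ^= 1                        # flip two symbols to simulate noise
sent[9] ^= 1
assert fec_decode(sent) == message  # both errors are voted away
```

A repetition code wastes most of its redundancy; Turbo codes arrange theirs far more cleverly, which is how they get within a whisker of the Shannon Limit at the same rate.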
DeBoy explained how, despite the sophistication of the coding scheme, the data throughput is much lower than most terrestrial systems, but there are ways of getting round this. In keeping with the philosophy of providing redundant circuits, the communications system has two transmitters and two Travelling Wave Tube Amplifiers (TWTAs) but, if all’s well, it’s possible to use them together.
“It’s possible to nearly double the downlink rate by powering both TWTAs, and we do it often,” he said. “A special signal splitter connects each radio transmitter to each TWTA, so if a TWTA were to have a problem, we could still use either radio. Each TWTA is connected to a specific input on the high-gain antenna. One TWTA feeds the port that transmits right-hand circular polarisation from the high-gain antenna, the other feeds the port that transmits left-hand circular polarisation. The polarisation of a radio frequency signal describes how the electromagnetic wave (specifically the electric field) behaves as it moves through space. Right- and left-hand circularly polarised signals are independent of each other and so it’s as if we have two separate channels for transmission.”
However, this isn’t always possible. “We can’t use this technique all the time, however, because the TWTAs are two of the more power-hungry components on the spacecraft. Sometimes there is not sufficient power to turn both TWTAs on during a downlink. But during the playback time after Pluto encounter, the dual-TWTA downlink will be routinely used to get the science data and pictures behind exciting new discoveries back to Earth as quickly and reliably as possible.”
^ In the future, New Horizons will head further into the Kuiper Belt, a ring of small objects the team hope to explore, subject to approval and funding
Sophisticated as New Horizons’ communication system may be, like the computer system it relies on technology that may seem outdated. In this case, however, that isn’t because the probe took so long to reach its target. Instead, it’s testimony to the fact that space travel places very different demands on equipment and, in some cases, the old ways are still the best.
The technology in question is the Travelling Wave Tube Amplifier, or TWTA, we’ve already encountered. It serves the same purpose as the power transistors in most data communications equipment, namely to amplify a tiny signal to an adequate power level to be fed to the antenna for transmission. While the term TWTA might sound high-tech, it’s actually a rather specialised type of valve: the red-glowing glass tubes that adorned radios before the invention of the transistor in 1947 and the development of transistor radios in the 1950s and 1960s.
Onwards and Upwards
Its encounter with Pluto and Charon is New Horizons’ pinnacle of achievement, but it doesn’t represent the end of the road for the space probe. It now heads into the Kuiper Belt, a huge area of rocky material extending well beyond the orbit of Pluto. If funding allows, it will be tasked with searching for other dwarf planets, many of them as yet unknown.
This brings us to Dawn. It may not have captured the public imagination in the same way as New Horizons, primarily because few people other than astronomers have heard of its destination, but this spacecraft also visited a dwarf planet, Ceres, this year. NASA’s Dawn probe went into orbit around Ceres – the largest object in the asteroid belt between Mars and Jupiter – in March, returning images of its crater-pocked surface and mysterious bright spots. It appears that 2015 truly is the year of the dwarf planet.
