BY RICHARD COMERFORD
Contributing Editor
I was recently given the opportunity to talk with Mark Pierpoint, Vice President and General Manager for Keysight's Internet Infrastructure Solutions team, about the impact of FPGAs on instrument design and what it means for instrument users. Mark's group focuses on everything in the network infrastructure, from the access interfaces into the network (typically wireless, but also wired) all the way into data centers.
Mark's history eminently qualifies him to discuss this topic. A Keysight/Agilent/HP employee for 28 years, Mark has a background in RF and microwave engineering and holds a PhD in nonlinear design. Over the years, he has covered almost every job inside Keysight except direct manufacturing. He has taught software programming in the past and has been following advances in FPGAs intently, with an eye to how Keysight uses them in its products today and how it envisions using them in future products.
Mark Pierpoint, Vice President and General Manager for Keysight's Internet Infrastructure Solutions team.
Richard Comerford: Let's begin by talking about how you're using FPGAs to make improvements in a wide variety of instruments. Because FPGAs and digitizers are used throughout your designs, I wanted to touch particularly on modular implementations (PXI and AXIe) and on the M9420A VXT. So, how are you using FPGAs in instrument designs?
M9420A VXT.
Mark Pierpoint: I don't think that there is any instrument we ship today that does not have an FPGA in it. As you probably know, FPGAs really came to the fore when Xilinx was formed in 1984, and I think right from the outset we were designing FPGAs into our products. One of the earliest products that used them was the Frequency Agile Signal Simulator, from the HP Stanford Park Division in Palo Alto, which was launched into the market in the late 1980s. Even back then, it had in excess of 20 different FPGAs in it. Today, we've really expanded that out, and I can't think of any instrument that doesn't have at least one in some shape or form.
You're probably going to ask some questions around what functionality they provide, why they're being used, and so forth. Very early on, FPGAs started out as programmable logic devices. They enabled us to take discrete logic-gate functions off the board and collect them together. That's one reason we use FPGAs: they are a very efficient way of consolidating all of the logic we need for triggering, synchronization, real-time coordination of hardware, and so on.
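As a rough illustration of the kind of consolidated trigger-and-control logic Pierpoint describes, the behavior of an edge trigger with holdoff can be modeled in a few lines of Python. The function, threshold, and holdoff values below are invented for the example and are not Keysight's implementation:

```python
# Behavioral model (illustrative only) of a simple FPGA-style trigger
# qualifier: fire on a rising edge above a threshold, then suppress
# re-triggering for a fixed holdoff interval.

def trigger_events(samples, threshold, holdoff):
    """Return the sample indices where a qualified trigger fires.

    samples   -- iterable of ADC sample values
    threshold -- trigger level
    holdoff   -- minimum number of samples between triggers
    """
    events = []
    armed = True       # re-armed only after the signal drops below threshold
    holdoff_left = 0   # counts down after each trigger
    for i, s in enumerate(samples):
        if holdoff_left > 0:
            holdoff_left -= 1
        if s < threshold:
            armed = True          # must cross from below: a rising edge
        elif armed and holdoff_left == 0:
            events.append(i)      # qualified trigger
            armed = False
            holdoff_left = holdoff
    return events

# Example: a noisy pulse train fires at indices 2 and 7
sig = [0, 0, 5, 6, 5, 0, 0, 6, 7, 0]
print(trigger_events(sig, threshold=4, holdoff=2))  # -> [2, 7]
```

In an instrument, equivalent logic would run in FPGA fabric at the sample clock rate, one comparison per cycle.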
The second key reason, which started to emerge in the mid '90s, was that as FPGAs got larger, people started to think about using them for digital signal processing: taking outputs from analog-to-digital converters, or pre-generating digital waveforms to be converted to analog. That passed the tipping point around the early 2000s. At that time, one of the challenges was that the FPGAs were not powerful enough to do that at the speeds we wanted. That changed with the range of FPGAs available from Xilinx and Altera in the 2008-to-2010 time frame, as they moved to smaller semiconductor process nodes (40 nm and below); the power and the flexibility of those parts really took a step upward.
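On the generation side, the math an FPGA (or host) might use to pre-compute a waveform for a digital-to-analog converter can be sketched in a few lines of NumPy. The sample rate, record length, and chirp parameters here are invented for illustration:

```python
# Illustrative sketch: pre-generating a digital waveform for a DAC,
# the transmit-side counterpart of processing ADC samples.
import numpy as np

fs = 100e6                   # assumed sample rate, 100 MSa/s
t = np.arange(4096) / fs     # 4,096-sample record
f0, f1 = 1e6, 10e6           # linear chirp from 1 MHz to 10 MHz

# Phase of a linear chirp: 2*pi * (f0*t + (f1 - f0) * t^2 / (2*T))
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1]))
waveform = np.cos(phase)

# Quantize to the 16-bit signed words a DAC pipeline would consume
dac_words = np.round(waveform * 32767).astype(np.int16)
```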
Today, we still use FPGAs for DSP (although they still cannot compete with the very best ASIC technology; they probably trail ASICs by five to seven years, but obviously the investment to make an ASIC is much higher than for an FPGA), and also for another key area: the vendors have done a really nice job of integrating fast transceivers onto FPGAs. An FPGA is now at the heart of some of the signal-flow pipelines in our products and lets you move data very quickly in and out of some fairly complex systems.
Comerford: In the architecture, do you segment functionalities into different FPGAs, or will one FPGA perform many of these functions?
Pierpoint: I think there is a definite separation, and that's partly due to cost. Once you get into FPGAs that have high-speed transceivers on them, and you could be talking tens of gigabits per second, those devices are pretty expensive: thousands of dollars for a part. What you're really trading off there is cost versus the flexibility that an FPGA brings you. For logic, synchronization, and triggering, you can typically use much lower-cost FPGAs. So, generally, we separate that functionality. Obviously, there still needs to be synchronization and triggering into some of these signal-path areas, and a more complex FPGA can do that, but where we only need synchronization, for example, you can use a much lower-cost device.
Comerford: Once you've programmed the FPGAs, and you've added their functionality, I take it that functionality is key to the operation of the instrument; that is to say, changing it will dramatically change the operation of the instrument or possibly damage it.
Pierpoint: Yes, absolutely. Maybe it's worth taking a brief segue (and this might be a direction you want to talk to Altera or Xilinx about), because as the functionality of FPGAs has grown over the last decade, it's become pretty clear that they face the same challenge the ASIC world has wrestled with for a long time: the gap between what is possible to do on an ASIC or an FPGA and what you can actually do with the software tools that are available. This so-called "design gap" exists between the capability of the tools and the capability of the raw device itself. What people typically find is that if you're pushing the envelope of what you can do in a particular FPGA, or filling it toward its capacity, it is incredibly difficult, if not impossible, to achieve some things, such as timing closure, or the throughput or stability of the system you're trying to put together.
We see a lot of our customers wrestling with this as well. Essentially, one of the biggest reasons for slippage in any R&D project today is this gap, particularly where you're trying to do multi-FPGA designs. You've got some processing capabilities split between FPGAs — a very challenging design problem. I'm aware of projects that basically doubled in length because they couldn't get the design to actually close and operate as planned, even though on paper, of course, the chip could do it.
I think if you look at the graphical tools in existence, there's a bifurcation today in the methodologies used to program FPGAs. Some people are trying to do this with a higher-level software approach, more like (and I'm using the term in a very loose, generic sense) programming at the C level and then going from C to gates. Various companies have purported to be able to do this. Xilinx purchased a company two or three years ago to do something along those lines, and they're really pushing that forward within their own tools. They're not the only ones. There are some open standards, like OpenCL and others, that were originally created to program GPUs and can also be used to program FPGAs.
Again, I would say that none of those tools are really ready for prime time today; the result is that you can typically implement lower-speed functions with them, but not higher-performance ones. Let me describe those kinds of tools as ready for synchronization and control functions. If you really want to get into the signal path, with the kinds of bandwidths and performance we typically aim for in our products, then you basically have to go down the other path for programming, which fundamentally gets you down to the gate level, whether by some form of HDL or some other coding at the hardware level.
The net result is where you were headed with your question: having coded an FPGA and gotten it operating, if somebody goes in and changes a piece of it, they can completely destroy the instrument's functionality. It's possible that that could include physical damage, but it's unlikely. It would certainly put the instrument into a mode where it would not operate or give correct results. That is one of the things that has made us very wary about opening up the ability for a user to customize the FPGA in an instrument. How could you do that in a way that gives users the flexibility they need but avoids their having to build, and prove the operation of, the whole instrument from the ground up?
Comerford: Yes, I see that. I infer from what you're saying that if users want to make an instrument perform differently by changing the FPGA program, they have to be intimately familiar with the FPGA and be able to program it at the gate level for it to function correctly.
Pierpoint: Consider the typical things that get put into FPGAs; we've talked about synchronization, triggering, and so on. There's probably not a lot of IP related to those, because what you're really trying to do is just align the operation of the different pieces of a particular block diagram.
Once you get into processing in the signal chain, you're taking a series of digital samples from an analog-to-digital converter, and what kinds of things might you want to do with that? You might want to filter that signal, for example. You might want to do a digital down-conversion. You might want to re-sample that signal so it comes out at a different sample rate. Those kinds of functions are implemented in FPGAs. We also implement them in ASICs, which typically run at higher speeds. You can build filters and set those things up. Those implementations are covered by a lot of the IP that we have, and those filters could be operating in real time to provide corrections on measurements. If you went in and did something without being certain about what you were doing, you could destroy all of the calibration on the instrument or turn it into something that would not make an accurate measurement.
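To make those steps concrete, here is a minimal NumPy/SciPy sketch of the receive-side chain Pierpoint lists (digital down-conversion, filtering, and re-sampling). The rates, tap count, and test tone are invented for illustration; a real FPGA would implement the same math continuously, in fixed-point hardware, at the full ADC rate:

```python
import numpy as np
from scipy import signal

fs = 250e6                          # assumed ADC sample rate, 250 MSa/s
n = 1 << 14
t = np.arange(n) / fs
adc = np.cos(2 * np.pi * 70e6 * t)  # stand-in ADC record: a 70-MHz tone

# 1. Digital down-conversion: mix against a 70-MHz numerically
#    controlled oscillator to shift the band of interest to DC.
nco = np.exp(-2j * np.pi * 70e6 * t)
baseband = adc * nco

# 2. Low-pass filter (a 127-tap FIR here) to reject the mixing image.
taps = signal.firwin(127, cutoff=5e6, fs=fs)
filtered = signal.lfilter(taps, 1.0, baseband)

# 3. Re-sample: keep every 16th sample for a 15.625-MSa/s output rate.
iq = filtered[::16]
```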
Comerford: And you actually wouldn't know that you were taking inaccurate measurements.
Pierpoint: It could look like it was operating perfectly normally, yes.
However, for probably three years or so, we've had a way of allowing our customers, primarily OEM customers, to go in and embed their own measurement IP into our products. We have a number of digitizers that are all programmable using one of our external software development kits: typically products like the U5303A, a PCIe digitizer; the M9703A or B, an AXIe digitizer; or the M9203A, a PXI version of that, which we've just launched. That kit is the U5340A, and, as I mentioned, it's been around for about three years. There's some really good information on keysight.com, and an article about this was recently published. Our concept is to use a standard, off-the-shelf design tool chain, something people are already very familiar with, and then provide what we internally call a sandbox (I think that's probably a reasonable industry term). What we do is provide an area of the FPGA to customize without impacting the operation of the digitizer. Do you want me to go further on this, Richard, or are you familiar with that?
Comerford: The sandbox idea, I believe, comes from basic computer security, where you isolate information from the rest of the computer system so that it is handled separately and can't actually get into the system. That's what I understand it to be.
Pierpoint: Absolutely. Within our digitizers, for example, we isolate the interfaces to the various parts of the system. We isolate the system monitoring, the clock and reset managers, the calibration capability, and so on, but we try to provide as big a sandbox as possible, and then we enable access through these off-the-shelf tools. Essentially, we've populated a series of templates into those tools that allow customers, using the tools they're already familiar with for programming at the FPGA level, to put in their own IP.
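Purely as a conceptual sketch, that partition can be modeled in a few lines of Python. The real U5340A kit operates at the HDL level, and every class, value, and method below is invented for the example:

```python
# Conceptual model of the sandbox partition: the vendor shell owns
# clocking, calibration, and host interfaces, and hands the user block
# a fixed, narrow streaming interface it cannot reach around.
import numpy as np

class UserBlock:
    """Template the customer fills in with proprietary processing."""
    def process(self, samples: np.ndarray) -> np.ndarray:
        return samples  # default: pass-through

class DigitizerShell:
    """Vendor-owned firmware shell; the user block never touches these."""
    def __init__(self, user_block: UserBlock):
        self._cal_gain = 1.0025       # calibration state, sealed off
        self._cal_offset = -0.003
        self._user = user_block

    def acquire(self, raw: np.ndarray) -> np.ndarray:
        corrected = raw * self._cal_gain + self._cal_offset  # shell applies cal
        return self._user.process(corrected)  # sandboxed user IP runs here

class PeakHold(UserBlock):
    """Example customer IP: a running peak detector."""
    def process(self, samples: np.ndarray) -> np.ndarray:
        return np.maximum.accumulate(samples)

shell = DigitizerShell(PeakHold())
print(shell.acquire(np.array([0.1, 0.5, 0.3, 0.9, 0.2])))
```

The point of the structure is the same as in the firmware: the user block sees only the samples handed to it and has no path to the calibration state or the system interfaces.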
We have customers that, for example, use our digitizers in a laser scanner for the [human] eye. They have their own algorithms that are very critical to them and that they don't really want to share, so they go in and populate the FPGA with those algorithms themselves. Or take high-energy physics: there may be several key algorithms that physicists at CERN, for example, want to put in place to track neutrinos or run some other experiment. They would apply their particular filters to the system, and off they go, running full real-time capability inside the product.
To put that in context, what we're talking about is hardware-level programming targeted at true FPGA engineers. This is not something you typically go into thinking, "I'm going to knock out some code in 30 minutes or so." It's a serious undertaking. But it works, and clearly the way we've done this, providing that sandbox, has produced some very fast turnaround times, certainly a lot quicker than starting with a bare-board design.
Comerford: I take it that the digitizers are particularly designed with that in mind, with the idea that you will have this area, the sandbox, that can be used for user operations?
Pierpoint: Yes, absolutely.
Comerford: It has to be designed and constructed that way.
Pierpoint: Yes.
In Part 2 of this interview, Richard Comerford will continue the sandbox discussion and relate it back to the VXT.