In 2015, Jane Cleland-Huang (center, standing), a professor of software engineering in the College of Computing and Digital Media (CDM), received nearly $700,000 in grant money from the National Science Foundation. That’s just one indication of her status as an authority on traceability* in safety-critical systems.
“The software that controls a plane or train, or that determines the release of insulin in a pump, or that delivers radiation therapy, has to work reliably,” she says. “Traceability shows that every part of a complex system is working as it should.”
Software engineering for safety-critical systems begins with requirements specifying what the software must do: What functionality is required? Then comes hazard analysis: What could go wrong? Traceability demonstrates that these hazards have been fully addressed in the delivered system. “It’s not enough to say the software works,” says Cleland-Huang. “Users must also be assured that hazards won’t happen.”
Traceability shows the connections or links between every artifact of a system—from stakeholder requirements and design specifications to source code. A chain of trace links shows which part of a system is potentially affected by a hazard and which part mitigates the hazard. Not surprisingly, traceability gets very complex, very fast: A single view of a system might have more than 300,000 links.
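A chain of trace links can be pictured as a directed graph from hazards down to code. The sketch below uses hypothetical artifact IDs (the names and link set are illustrative, not drawn from any real system) to show how following links answers the question "which artifacts does this hazard touch?":

```python
from collections import defaultdict, deque

# Hypothetical trace links: hazard -> requirement -> design -> code/test.
# A real safety-critical system might hold 300,000+ of these.
trace_links = [
    ("HAZ-01", "REQ-12"),           # hazard mitigated by a requirement
    ("REQ-12", "DES-07"),           # requirement realized in a design element
    ("DES-07", "src/pump/dose.c"),  # design implemented in source code
    ("REQ-12", "TEST-33"),          # requirement verified by a test case
]

graph = defaultdict(list)
for src, dst in trace_links:
    graph[src].append(dst)

def downstream(artifact):
    """Return every artifact reachable from `artifact` via trace links."""
    seen, queue = set(), deque([artifact])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(downstream("HAZ-01")))
# -> ['DES-07', 'REQ-12', 'TEST-33', 'src/pump/dose.c']
```

Even this toy graph shows why the problem scales badly: every new artifact multiplies the links that must be created, checked, and kept current.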
The goal of Cleland-Huang’s research is to ensure that trace links in safety-critical systems are correct, complete and consistent, and that they remain in compliance even as the software changes over time. Her current team includes Alexander Rasin (standing), an assistant professor and database expert, three PhD students—Jin Guo (left, seated), Mona Rahimi (right, seated) and Sugandha Lohar—and seven undergraduate and grad student summer interns.
“We’re looking at ways we can use artificial intelligence and machine learning to automate the process of creating and maintaining trace links, and we’re searching for new ways to access and use trace data to give users powerful analytical capabilities,” she says.

Building an Intelligent Expert System
Guo is applying the technologies of artificial intelligence (AI) to improve the traceability process. “If the computer can understand the information embedded in the software artifacts, it could connect the artifacts across the system, and that would be a significant improvement in efficiency,” she says. “If nothing else, our solutions could enable the computer to retrieve information faster than a person can. Also, since humans make mistakes, AI should increase accuracy.”
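One common way to let a computer "connect the artifacts" is information retrieval: score each candidate pair of artifacts by textual similarity and surface the best matches for human review. This is a minimal bag-of-words cosine-similarity sketch with invented artifact text, not the team's actual approach:

```python
import math
import re

def tokens(text):
    """Lowercase word tokens from an artifact's text."""
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    va, vb = {}, {}
    for t in a:
        va[t] = va.get(t, 0) + 1
    for t in b:
        vb[t] = vb.get(t, 0) + 1
    dot = sum(va[t] * vb.get(t, 0) for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical requirement and source-file summaries.
requirement = "The pump shall halt insulin delivery when occlusion is detected"
artifacts = {
    "src/dose.c":      "compute insulin dose from basal rate profile",
    "src/occlusion.c": "detect occlusion and halt insulin delivery motor",
    "src/ui.c":        "render menu and battery status on screen",
}

# Rank source files as candidate trace links for the requirement.
ranked = sorted(artifacts,
                key=lambda k: cosine(tokens(requirement), tokens(artifacts[k])),
                reverse=True)
print(ranked[0])  # -> src/occlusion.c
```

Raw term overlap like this is fast but shallow; Guo's point about making the machine "know" a domain is precisely that synonyms and domain vocabulary defeat simple lexical matching.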
Behind this quest is a bigger challenge, as Guo elaborates: “I’m exploring ways that knowledge can be represented: How can we enable a machine to ‘know’ a domain and to compare information across artifacts? That’s really interesting, and it would have a lot of applications. AI and software engineering make sense together.”

Evolving Trace Links Automatically
Rahimi is giving the computer the ability to detect changes to the software as they happen, define each change’s impact on other parts of the system, alert the user about possible consequences, and ask the user to approve or reject automatic updates to the trace links.
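The workflow just described (detect a change, find the affected links, ask the user to approve or reject an update) can be sketched in a few lines. All identifiers here are hypothetical, and the `approve` callback stands in for the interactive recommender:

```python
# Hypothetical trace links: requirement -> artifacts it traces to.
trace_links = {
    "REQ-12": ["src/occlusion.c", "TEST-33"],
    "REQ-19": ["src/ui.c"],
}

def on_change(changed_file, approve):
    """When `changed_file` is edited, flag every trace link that touches it
    and ask the user (via the `approve` callback) whether each link still
    holds, mimicking a recommender that runs as the software changes."""
    suspect = [(req, art) for req, arts in trace_links.items()
               for art in arts if art == changed_file]
    kept, dropped = [], []
    for link in suspect:
        (kept if approve(link) else dropped).append(link)
    return kept, dropped

# Here the user confirms the flagged link is still valid.
kept, dropped = on_change("src/occlusion.c", approve=lambda link: True)
print(kept)  # -> [('REQ-12', 'src/occlusion.c')]
```

The point of flagging links at edit time, as Rahimi notes below, is that the reviewer still remembers the change; reviewing a backlog of stale links several versions later is far less precise.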
“One of the key necessities in traceability is verification: As software changes over time, trace links can fall out of compliance,” she says. “We’re creating a ‘recommender system’ that lets the user decide whether an update is called for. This would improve system precision, which is really important because problems multiply as the distance between versions grows. In safety-critical systems, any failure can cause huge problems. That’s why it’s best if updates to trace links happen in real time, as the software is changed, rather than after the fact.”

Using Natural Language to Query a Database
The data in trace links could be useful above and beyond safety verification. For example, it could be used to predict which parts of a system might be likely to fail in the future, or it could be used to calculate the risks, costs and effort of changing any part of a system. The greatest barrier to that kind of analysis is the difficulty in formulating queries for data retrieval.
That’s where TIQI comes in. It’s a natural language solution being built by a large team of researchers including Rasin and Lohar. Like its commercial cousin Siri, TIQI would transform a spoken inquiry into a computer language, in this case SQL (Structured Query Language), the common standard for relational database management systems.
“Right now, we’re teaching the system how to answer simple questions, such as ‘How many test cases have failed in the past month?’ More sophisticated analytics—such as ‘Is my project safe?’—will be far more challenging,” says Lohar. “There are two reasons for that: one, traceability data is spread across hundreds of artifacts, in multiple formats; two, natural language is ambiguous. What exactly is the speaker’s intended meaning? The computer has to be able to ask the user when it doesn’t understand a word or phrase.”
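One simple way to handle questions like Lohar's example is template matching: recognize a question pattern and fill a parameterized SQL query. This toy sketch is not TIQI's actual grammar (the table name and schema are invented), but it shows the shape of the translation and why unmatched questions must bounce back to the user:

```python
import re

# Hypothetical pattern -> SQL template table; a real system needs a far
# richer grammar plus a dialogue step for ambiguous phrasing.
TEMPLATES = [
    (re.compile(r"how many (\w+) (?:cases )?have (\w+) in the past (\w+)",
                re.IGNORECASE),
     "SELECT COUNT(*) FROM {0}_cases "
     "WHERE status = '{1}' AND date >= DATE('now', '-1 {2}')"),
]

def to_sql(question):
    """Translate a natural-language question into SQL via templates."""
    for pattern, template in TEMPLATES:
        m = pattern.search(question)
        if m:
            return template.format(*m.groups())
    # No template matched: the system should ask the user to clarify.
    raise ValueError("ambiguous question -- ask the user to clarify")

print(to_sql("How many test cases have failed in the past month?"))
# -> SELECT COUNT(*) FROM test_cases
#    WHERE status = 'failed' AND date >= DATE('now', '-1 month')
```

A question such as "Is my project safe?" matches no template at all, which is exactly the harder analytics problem Lohar describes: the answer is scattered across many artifacts and the intent must be negotiated with the user.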
Rasin elaborates: “TIQI would make the computer understand a spoken language, including domain-specific terms. That’s hard enough, but then speed has to be part of the picture. After asking a question, no one wants to wait an hour for the answer. How do we make the computation happen in seconds? And if there’s ambiguity, TIQI will have to ask for very specific clarification, because users don’t want to start from scratch every time. We want to do much more than simple searches, and that’s a fascinating challenge.”
“We’re tackling a daunting challenge, and we’re breaking barriers,” says Cleland-Huang. “But we have a few tricks up our sleeves. So, we’re having fun.”
*Wikipedia defines traceability as the ability to verify the history, location or application of an item by means of documented recorded identification. The dictionary definition has a bit more poetry: a trace is a footprint.