Today, optical networks use lasers to transmit messages. By comparison, future quantum optical networks will use much shorter bursts of light, with entangled photons carrying messages. Quantum networking promises advantages such as improved message accuracy and efficiency, as well as unprecedented security and computation capabilities. However, designing a quantum network poses challenges that differ from those of today’s optical networks. It requires experimentation with quantum protocols, models, and devices, and collaboration among multiple agencies.
NIST researchers have developed the software for a management system called “Multiverse,” which will automate experiments on a quantum network testbed and will be deployed across multiple sites. NIST researchers, collaborating with colleagues at the Naval Research Laboratory, described this system in “Multiverse: Quantum Optical Network Management,” presented at the GOMACTech 2023 Conference.
The Multiverse system will manage experiments on the testbed and enable a realistic environment for a quantum network. The testbed includes quantum devices – such as photon sources and detectors – with varying capabilities. An optical layer connects these devices using both predefined fiber paths and paths provisioned on demand. These paths allow quantum devices to send and receive photons. Paths are selected based on test criteria such as path length, fiber loss, source power, detector efficiency, and resource availability.
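To make the selection criteria concrete, here is a minimal sketch, in Python, of how criteria-based path selection might look. The class names, thresholds, and example paths are illustrative assumptions, not details of the Multiverse implementation:

```python
from dataclasses import dataclass

@dataclass
class FiberPath:
    name: str
    length_km: float   # path length
    loss_db: float     # total fiber loss along the path
    available: bool    # resource availability

def select_path(paths, max_loss_db, max_length_km):
    """Pick the available path that meets the test criteria with the
    lowest fiber loss (an illustrative policy, not NIST's)."""
    candidates = [p for p in paths
                  if p.available
                  and p.loss_db <= max_loss_db
                  and p.length_km <= max_length_km]
    return min(candidates, key=lambda p: p.loss_db, default=None)

# Hypothetical paths between testbed sites
paths = [
    FiberPath("site-A-to-B", length_km=40.0, loss_db=9.2, available=True),
    FiberPath("site-A-to-C", length_km=55.0, loss_db=12.5, available=True),
]
print(select_path(paths, max_loss_db=10.0, max_length_km=60.0))
```

A real selector would also weigh source power and detector efficiency against the loss budget, but the structure would stay the same: filter on hard constraints, then rank on the remaining criteria.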
The Multiverse system and testbed will be used to conduct three types of experiments:
Assessing the synchronization of photon sourcing, entanglement, and detection
Measuring whether the entangled photons share a matching quantum state, specifically matching polarization
Ensuring the stability and purity of the light signal needed to achieve the desired polarization of entangled photons
Researchers are considering quantum network design decisions that require test equipment that is not yet available or is difficult to use. They are also developing simulations for evaluating advanced aspects of a quantum network.
In late April 2023, NIST’s Smart Connected Systems Division Chief, Abdella Battou, described the connectivity needed for the quantum network testbed to the Advanced Communications Technologies Working Group, which is part of the Federal government’s Interagency Committee on Standards Policy. The testbed is being pursued by the Washington Metropolitan Quantum Network Research Consortium and will enable experiments needed to develop quantum optical networks, which offer significant advantages over today’s classical optical networks.
The testbed has multiple remote sites, including a NIST site, and connects quantum devices using an optical layer consisting of multiple fiber paths. The Multiverse management system will select and control these paths based on the types of experiments conducted.
First and foremost, the testbed needs sufficient connectivity to enable two separate sources to entangle photons. These entangled photons are the basic means of communicating via quantum optical networks. This connectivity also must allow detectors to receive and measure the correlated behavior of these entangled photons. The testbed also requires connectivity that will enable:
Two separate experimenters to exchange either quantum or classical optical information
Two separate experimenters to exchange secure quantum and classical communications
The Multiverse management system and testbed devices and nodes to interact with each other
The optical layer to characterize noise and losses in fiber paths
Multiplexing – sending multiple streams of information via a fiber path at the same time
Sending datagrams – self-contained messages – via the testbed network
Switching fiber paths
Addressing latency limitations
Stabilizing the quantum layer for certain protocols
Today, automated systems – systems that operate without human intervention – are increasingly being developed and deployed. In a presentation at the University of Delaware, NIST Transformational Networks and Services Group Leader Tao Zhang provided a vision of how systems could become more autonomous: not only operating without human intervention but also improving themselves.
Automated and future autonomous systems operate on a “closed loop.” They need to be trained on data to detect events, perform risk analysis, and respond to events. This requires addressing many new challenges, including building machine learning models. Today, training data often must be labeled by humans to support supervised machine learning – a costly and time-consuming process. We will need ways to significantly reduce human involvement in preprocessing such training data. We will also need ways to train machine learning models in a distributed manner, wherever data is located, such as at the network edge.
Automated systems need continuous learning to meet new and changing situations. The emergence of 5G and 6G could enable automated systems to go beyond edge learning to what is called “pervasive learning”: an automated system will share selected edge-collected data and edge-learned knowledge and get centralized learning help from 5G/6G core networks and clouds. Also, edge and cloud systems will increasingly collaborate with each other. Where data should be processed – at the edge or in the cloud – can be determined by data availability, learning needs, and computing capabilities, among other factors.
Highly automated 6G networks will enable greater and more continuous learning. 6G networks are expected to rely on an “Artificial Intelligence-Native Architecture,” which will use artificial intelligence extensively, reaching all components in a networked system. This architecture will allow greater data sharing across connected systems, enabling their continuous learning and improvement.
In a “fireside chat” at the Connected Vehicles USA 2023 conference, NIST’s Transformational Networks and Services Group Leader Tao Zhang discussed future communications needs for automated vehicles. The discussion was moderated by Scott McCormick, president of the Connected Vehicle Trade Association.
Zhang first offered insights on how 5G networks will communicate with automated vehicles. Unlike much simpler 4G networks, 5G networks will be more “intelligent”: they can be configured to best deliver the different services that applications need. More specifically, a 5G network will use “network slices” – multiple independent, virtual networks – to isolate as well as deliver these different services to applications such as automated vehicles, phones, and more.
Zhang also assessed the broader needs of these 5G networks. Their complexity will require advanced automation, involving artificial intelligence, not only to configure networks for the optimal delivery of isolated and differentiated services, but also for quality-of-service assurance. This automation must be able to measure, characterize, and control interference on these network slices in order to control delays and reduce service disruptions. Artificial intelligence will also be needed to improve networks and their capabilities.
Automated vehicles will have their own onboard intelligent devices that must be trained and updated over time. Edge learning is becoming increasingly important to such continuous learning, as vehicles collect and generate vast amounts of new data over time and cannot send it all to the cloud.
Additionally, Zhang pointed out that there is no guarantee that a network will always work as intended. It is therefore important for onboard and offboard applications to become more intelligent so that they can operate in a broader range of network conditions.
Distributed digital devices – such as smartphones and more – are updated and trained by means of “Federated Learning.” Basically, a central server shares a training model with selected devices. Each device updates the model with its local data and sends it back to the central server, which aggregates the devices’ inputs into a new model. The central server repeats the process until update criteria are met.
Federated Learning enables collaborative learning of a shared model without exposing devices’ local data. The problem is that the convergence process can be slow because of data differences. Devices collect data from different sources, using different tools, under different conditions, and may have access only to partial or biased data, which can cause data distributions to differ among devices. Thus, the data is not “independent and identically distributed” (IID), which is needed for fast Federated Learning.
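In outline, one round of the basic scheme (commonly called FedAvg) looks like the sketch below. The linear model, learning rate, and skewed device data are illustrative stand-ins, chosen to show where non-IID data enters the picture:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=1):
    """One device's step: gradient descent on a linear model with
    squared loss, standing in for a real training procedure."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def federated_round(global_w, devices):
    """Server sends the model out; devices train locally on private
    data; server averages the returned models (FedAvg)."""
    return np.mean([local_update(global_w, X, y) for X, y in devices], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for shift in (-2.0, 2.0):   # non-IID: each device sees a skewed slice
    X = rng.normal(shift, 1.0, size=(50, 2))
    devices.append((X, X @ true_w + rng.normal(0, 0.1, size=50)))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, devices)
print(w)   # approaches true_w, but the skewed data slows convergence
```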
NIST researchers offer an improved process in their paper “Federated Learning with Server Learning for Non-IID Data,” presented at the IEEE 57th Annual Conference on Information Sciences and Systems. Researchers refer to this improvement as “server learning”: the central server collects a small amount of training data, learns from it, and incrementally distills that knowledge into the training model. This enables the server to reconcile and converge differing data without demanding additional computation, storage, or communication from the devices. The researchers developed an algorithm for the process and analyzed it formally.
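Continuing the sketch above, server learning can be pictured loosely as the server performing a small update of its own and blending it into the aggregate. This is a simplification for illustration only – the paper distills the server’s knowledge into the model incrementally, and the blending weight here is an invented hyperparameter:

```python
def server_learning_round(global_w, devices, server_X, server_y, mix=0.3):
    """One round of federated learning augmented with server learning:
    the server also learns from a small dataset of its own and blends
    that knowledge into the aggregated model (illustrative rule only,
    not the paper's algorithm)."""
    aggregated = federated_round(global_w, devices)
    server_w = local_update(global_w, server_X, server_y, lr=0.05)
    return (1 - mix) * aggregated + mix * server_w
```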
Researchers evaluated the process’s accuracy and convergence rate on two datasets, comparing the results against conventional Federated Learning. They found that server learning provided more accurate updates and training and needed only a small dataset to achieve meaningful performance gains.
The research team took the view that a supply chain is a “cyber-physical system” (CPS): one that uses the Internet to connect physical systems and computational components to enable coordinated operations for an intended purpose. This perspective allowed researchers to use NIST’s CPS Framework to develop a common terminology for supply chain components and stakeholders, who had used differing language to describe their products and needs. The CPS Framework served as a foundation for developing a model to assess supply chain resilience.
By considering a supply chain as a collection of interdependent contracts – each specifying the requirements of the parties to the contract – and considering contracts as mappings between buyer and seller requirements, a supply chain can be formalized mathematically and encoded for analysis by computer. That analysis can identify potential vulnerabilities during surges in demand and provide alternatives to the breaches that can result. The researchers saw these contractual requirements as corresponding to stakeholder “concerns,” per the NIST CPS Framework, and further to the ten high-level CPS “aspects,” such as “functional” and “trustworthiness.” Such requirements can be encoded directly, as sketched after the example below.
For example, a hypothetical company XYZ Homes contracts with Lumber Yard A to:
Produce 144,000 board feet of lumber – a functionality concern/requirement
Provide above Number 2 Common grade lumber – reliability, trustworthiness concerns/requirements
Deliver 14-16 tractor-trailer loads of lumber in one month – a time-related business concern/requirement
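A minimal sketch of such an encoding might represent each requirement with its CPS aspect and check it against a seller’s capacities. The field names, quantities, and breach rule are hypothetical illustrations, not the team’s actual formalism:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    aspect: str      # CPS Framework aspect, e.g. "functional"
    concern: str     # stakeholder concern the requirement expresses
    minimum: float   # required quantity
    unit: str

@dataclass
class Capacity:
    concern: str
    available: float

def breached(requirements, seller_capacities):
    """Return the buyer requirements the seller cannot currently meet."""
    cap = {c.concern: c.available for c in seller_capacities}
    return [r for r in requirements if cap.get(r.concern, 0.0) < r.minimum]

xyz_requirements = [
    Requirement("functional", "lumber_board_feet", 144_000, "board feet"),
    Requirement("business", "trailer_loads_per_month", 14, "loads"),
]
lumber_yard_a = [
    Capacity("lumber_board_feet", 120_000),   # shortfall during a surge
    Capacity("trailer_loads_per_month", 16),
]
print(breached(xyz_requirements, lumber_yard_a))
# -> the board-feet requirement is breached; an analyzer could then
#    search other sellers' capacities for an alternative mapping
```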
Thus, the team showed that contract requirements can be recast as expressions in a programming language for modeling the dynamics of a supply chain and assessing contract feasibility, satisfaction, and problem mitigation. They view this approach as a step toward reasoning about supply chain resilience. The team plans to investigate the relationship between this approach and smart contracts, which automatically execute actions according to an agreement.
Researchers and city planners have faced problems and many unanswered questions in developing and reusing Internet of Things (IoT) services. To aid these initiatives, NIST and university researchers reviewed solutions proposed in the literature for reuse of IoT services in their recently published “IoT Capabilities Composition and Decomposition: A Systematic Review,” in IEEE Xplore.
Developing IoT services, or “service composition,” involves pulling together IoT capabilities to create a new IoT service that provides added value; “decomposition” aims to reuse parts of those services for other purposes.
NIST and university researchers developed a step-by-step approach to composing and decomposing IoT services. This effort involved tracing these steps across a layered architecture, with devices in the bottom layer, composition-ready capabilities in the middle layer, and composite capabilities providing value-added services in the top layer. Steps for composing services began at the bottom of the architecture and moved up; steps for decomposing services started at the top and moved down.
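As a rough illustration of that bottom-up/top-down flow – with invented device names and a toy service, not the paper’s formal model – composition wraps device functions into capabilities and combines them, while decomposition recovers the reusable parts:

```python
# Bottom layer: raw device functions
devices = {
    "thermometer": "read_temperature",
    "hvac_unit": "set_airflow",
}

# Middle layer: wrap device functions as composition-ready capabilities
capabilities = {name: f"capability:{fn}" for name, fn in devices.items()}

# Top layer (composition, bottom-up): combine capabilities into a
# value-added composite service
def compose(service_name, *caps):
    return {"service": service_name, "uses": list(caps)}

service = compose("smart_climate_control",
                  capabilities["thermometer"], capabilities["hvac_unit"])

# Decomposition (top-down): recover capabilities for reuse elsewhere
def decompose(svc):
    return svc["uses"]

print(service)
print(decompose(service))
```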
Using this approach, researchers identified the problems and unanswered questions in composing and decomposing IoT services, which they addressed through a systematic literature review. The review resulted in a taxonomy covering three aspects of composing and decomposing services:
Formal Aspect: Provides standards, frameworks, reference architectures, and formal verification, which apply to composing and decomposing services.
Technical Aspect: Covers composing and decomposing services for domains; stakeholders’ concerns; real-world implementations; automation; and measurability of novel capabilities. The use of artificial intelligence/machine learning in composing smart services is a key contribution.
Quality-of-Service Aspect: Describes quality of service related to scalability, interoperability, and privacy in composing and decomposing services.
Additionally, researchers addressed essential questions related to composing and decomposing services, particularly those associated with scalability, interoperability, and privacy challenges.
NIST and Intel researchers were awarded best paper at the 19th IEEE International Conference on Factory Communications Systems (WFCS 2023) in April 2023. The paper is titled “Scheduling for Time-Critical Applications Utilizing Transmission Control Protocol in Software-Based 802.1Qbv Wireless Time Sensitive Networking” and was authored by NIST’s Rick Candell, Karl Montgomery, and Mohamed Hany, and Intel’s Dave Cavalcanti and Susruth Sudhakaran.
The paper was presented in a session of high-quality papers that included 1) a methodology for using software-based wireless time-sensitive networking to control robotic manipulators on a highly contentious shared Wi-Fi channel, 2) zero-delay roaming for mobile robotic platforms, 3) channel quality prediction using statistical methods, 4) a resource allocation framework for wireless sensor networks, and 5) a 60 GHz mm-Wave propagation measurement approach for industrial environments, validated in a steel factory.
The paper presents a methodology for using wireless time-sensitive networking (TSN) for industrial applications. Wireless systems are key to the next generation of operational systems in factories, warehouses, retail spaces, automated buildings, etc. Relative to wired systems, wireless networks are easier and cheaper to install and maintain, have a simpler infrastructure, and enable greater system mobility. Wireless TSN enables deterministic latency and higher reliability than previously achievable using traditional Wi-Fi channel access approaches.
TSN was originally developed for Ethernet-based networks and for applications using the unreliable UDP transport protocol. In the paper, the NIST-Intel team demonstrated a novel method for accommodating the reliable TCP protocol, which is much more widely used in factory communications systems. The results of this work are anticipated to be included in a future release of the Wi-Fi standard, IEEE 802.11.
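For a sense of how an 802.1Qbv-style schedule partitions airtime, the sketch below models a repeating gate control list in which one window is reserved for time-critical traffic and the rest is left to best-effort flows such as ordinary TCP. The cycle length and gate assignments are invented for illustration and are not the schedule from the NIST-Intel paper:

```python
CYCLE_US = 1000   # repeating 1 ms schedule cycle

# Each entry opens a set of traffic-class gates (0-7) for a duration.
# Gate 7 carries time-critical frames; gates 0-6 carry best effort.
gate_control_list = [
    ({7}, 200),                      # protected time-critical window
    ({0, 1, 2, 3, 4, 5, 6}, 800),    # best-effort window
]

def open_gates(t_us):
    """Return which traffic-class gates are open at time t (microseconds)."""
    t = t_us % CYCLE_US
    for gates, duration in gate_control_list:
        if t < duration:
            return gates
        t -= duration
    return set()

print(open_gates(100))   # {7}: only time-critical traffic may transmit
print(open_gates(500))   # best-effort gates are open
```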
In late April 2023, the journal Big Data and Cognitive Computing awarded its 2021 Best Paper Award to “6G Cognitive Information Theory: A Mailbox Perspective,” authored by NIST’s Hamid Gharavi and university researchers. The paper proposes a “mailbox theory” to handle the 6G network’s increased data rates, which are expected to be one terabit per second or more – over 100 times those of a 5G network. The paper has received 19 citations to date.
The mailbox theory envisions a 6G network that would customize services, transmit valued data, and interact with users. This network would also have intelligent applications, making it capable of transmitting, storing, and analyzing large data and of providing personalized access. Additionally, the network would allow users to define and schedule functions and would adjust in real time to changes in user demand. The network would also significantly reduce redundant transmissions and better ensure that semantic meanings are mined, extracted, and sent.
NIST will hold a virtual workshop on Standards and Performance Metrics for On-Road Automated Vehicles (AV), September 5-8, 2023. Free registration and schedule are available online. The workshop aims to:
Connect the AV community
Provide a holistic update on NIST’s recent work in the area
Serve as a forum for feedback and discussion
Gather information that will enable NIST to best help stakeholders
NIST and many stakeholders view these workshops as vital to the future of automated vehicles. On-road AVs are expected to significantly influence key aspects of daily life. Yet, these complex systems can pose safety risks if they do not perform as expected. Thus, industry and government agencies seek to develop new measurement science to characterize performance of these systems in order to help mitigate risks to manufacturers and consumers.