Quality of Service at Layer 2 (MAC) of an 802.11 Wireless LAN System
by Amit Bansal, Wipro Technologies
When traffic in a network grows beyond the network's capacity, service degrades. While most applications can cope with such degradation, real-time or multimedia traffic cannot. Network administrators cannot keep pace with the rapid growth in computer network traffic simply by adding capacity (increasing bandwidth). Instead, it is better to optimize the use of the available bandwidth.
Quality of Service (QoS1) mechanisms can provide the means to manage the network resources in an efficient manner.
This white paper discusses QoS as applicable to a Wireless Local Area Network (WLAN2), the benefits of QoS, and the parameters that govern QoS in a network in general. It then examines QoS in a WLAN environment in detail and studies the different techniques that can be applied to achieve it. The paper assumes that the reader is familiar with the IEEE 802.11 standard and protocols.
INTRODUCTION
Benefits of QoS
QoS mechanisms offer improved services to the end user while reducing the need for upgrades to network capacity. QoS allows for better use of the existing network infrastructure, improves service to the network users and reduces the cost of providing these services, thus increasing the ROI on network infrastructure. These mechanisms are designed to differentiate between categories of traffic based on priority, required bandwidth, timing requirements, etc. One of the main goals of QoS is to provide priority, including dedicated bandwidth, controlled jitter and latency (required by some real-time and interactive traffic), and improved loss characteristics. It adds predictability and controllability beyond the existing best-effort service that networks provide.
Traffic handling mechanisms
There are two important traffic-handling mechanisms for QoS: Integrated Services and Differentiated Services.
Integrated services (IntServ)
IntServ services are typically applied on a per-conversation basis. Network resources are distributed according to an application’s QoS request and are subject to available bandwidth. This mechanism is used for applications such as video and voice, which may need periodic service. It provides end-to-end, per-connection QoS with support for streaming and centralized scheduling. This method of QoS operates at Layer 3 and is achieved by scheduling specific streams at specific times to minimize latency and jitter, and by mapping each connection to a traffic specification. IntServ is usually associated with the RSVP signaling protocol.
Differentiated services (DiffServ)
DiffServ is an aggregate traffic-handling mechanism suitable for use in large routed networks that may carry many thousands of conversations, where it is not practical to handle traffic on a per-conversation basis. Instead, network resources are distributed according to traffic priority. DiffServ provides a simple way of expressing traffic priorities as part of the medium access mechanism, but it is not suitable for applications, such as video, with rigid timing constraints. This method of QoS operates at Layers 1 and 2 and is achieved via class differentiation (different parameters for each priority), packet classification (the eight 802.1p priority classes are mapped to four priority queues at each device) and adaptation to traffic intensity (admission control).
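The packet classification step can be pictured with a small sketch. The C function below uses the commonly cited 802.1D-style assignment of user priorities to the four 802.11e/WMM access categories; it is an illustration only, not code from any particular implementation.

```c
#include <stdio.h>

/* Four transmit queues (access categories), lowest to highest priority. */
enum access_category { AC_BK, AC_BE, AC_VI, AC_VO };

/*
 * Map an 802.1p user priority (0-7) to one of the four access categories,
 * following the commonly used 802.1D-style mapping (priorities 1 and 2
 * sit below best effort, priorities 4-7 above it).
 */
static enum access_category classify(unsigned int user_priority)
{
    switch (user_priority) {
    case 1:
    case 2:
        return AC_BK;            /* background  */
    case 4:
    case 5:
        return AC_VI;            /* video       */
    case 6:
    case 7:
        return AC_VO;            /* voice       */
    case 0:
    case 3:
    default:
        return AC_BE;            /* best effort */
    }
}

int main(void)
{
    for (unsigned int up = 0; up < 8; up++)
        printf("802.1p priority %u -> access category %d\n", up, classify(up));
    return 0;
}
```

Each of the four queues then contends for the medium with its own parameters, as discussed in the EDCA section later in this paper.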
Parameters
Technically, QoS can be implemented as a set of parameters that govern and shape the network traffic. Every transmission can be negotiated between the server and the client based on parameters such as bandwidth, throughput, delay, jitter and loss. If the parameters provided by the server do not meet the requirements of the client, the connection may be refused. This may be the most important feature of QoS, as it allows new connections to be refused if they might affect existing connections. This in turn prevents degradation of the network and preserves the QoS that has been guaranteed to the existing nodes.
Some of the important parameters related to QoS are:
Delay: This is the delay that a transmitted packet can suffer through the network. Interactive applications such as video conferencing, web cast and telephony are particularly sensitive to delay. If the delay is not constant for a given stream across the network, the receiver not only experiences delay but also jitter in the delay.
Jitter: This is the variation in delay. Jitter is an important consideration in isochronous applications and is subject to strict bounds, for example when decoding MPEG frames. Delay and jitter affect the synchronization requirements at the client end for reconstruction of the playout stream. Hence, jitter should be kept to a minimum within a multimedia system. (A sketch of a simple jitter estimator appears after this list.)
Mean data rate: This specifies the average data rate, i.e. the amount of data that will be normally sent through the network per unit time, specifying the traffic communication needs in terms of the bandwidth required.
Peak data rate: This specifies the maximum allowable data rate.
Priority: This provides a way for a transport user to indicate that some of its connections are more important than others. In the event of congestion, it ensures that high-priority connections are serviced before low-priority ones.
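As an illustration of how a receiver might track the jitter parameter described above, the sketch below uses the running interarrival-jitter estimator popularized by RTP (RFC 3550). The packet timestamps in main() are hypothetical values chosen for the example.

```c
#include <math.h>
#include <stdio.h>

/*
 * Running interarrival-jitter estimator in the style of RTP (RFC 3550).
 * transit = arrival_time - send_timestamp for each packet; the jitter is
 * a smoothed average of the difference in transit times between
 * consecutive packets.
 */
struct jitter_state {
    double prev_transit;   /* transit time of the previous packet        */
    double jitter;         /* current smoothed jitter estimate           */
    int    have_prev;      /* non-zero once one packet has been observed */
};

static double update_jitter(struct jitter_state *s, double send_ts, double arrival_ts)
{
    double transit = arrival_ts - send_ts;

    if (s->have_prev) {
        double d = fabs(transit - s->prev_transit);
        s->jitter += (d - s->jitter) / 16.0;   /* 1/16 smoothing factor */
    }
    s->prev_transit = transit;
    s->have_prev = 1;
    return s->jitter;
}

int main(void)
{
    /* Hypothetical 20 ms voice packets with slightly varying network delays. */
    double send[]    = { 0.000, 0.020, 0.040, 0.060, 0.080 };
    double arrival[] = { 0.050, 0.072, 0.091, 0.114, 0.130 };
    struct jitter_state s = { 0 };

    for (int i = 0; i < 5; i++)
        printf("packet %d: jitter estimate %.4f s\n", i,
               update_jitter(&s, send[i], arrival[i]));
    return 0;
}
```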
QOS IN A WIRELESS SCENARIO
Inherent challenges of a wireless medium
There are intrinsic differences between a wired network and a wireless network, with the ability to instantaneously set up and tear down a network probably being one of the most notable. Unlike a wired network, where resources are always available and can be depended on, a wireless network can make no guarantees about network resources. Most wireless technologies support considerably less bandwidth than wired technologies provide. A wired network can depend on network resources to be available and to provide (nearly) equal performance and bandwidth every time such resources are used, while a wireless network cannot predict network capabilities and behavior. Available bandwidth is a function of the wireless medium and the conditions of the environment in which the wireless device is deployed. Factors such as distance, fading, delay spread, the Doppler effect, interference by other wireless devices, obstacles, blind spots and atmospheric conditions can change the network behavior unpredictably. Such adverse conditions have to be overcome for QoS in a wireless network to be viable.
Other challenges
Supporting QoS in a wireless environment is a challenge not just because of the unpredictability of the wireless medium, but also because of the overheads that a mobile device in such a network will face. A wireless device must not only cope with the prevailing network conditions but also make the best use of them, even as those conditions vary.
If such a device is mobile, it has to cope with constantly changing signal quality and changing wireless parameters (if it can roam between networks) like supported rates, security etc. These factors add overhead in terms of handoff between networks, which might affect real time traffic and consequently the QoS at the device.
If a device is portable it will also have to cope with design restrictions like weight, power consumption and cost. Power consumption being one of the most important factors in a portable device, such devices will have to manage battery power efficiently even while maintaining the QoS and performance.
The constantly changing nature of the network makes providing QoS in a wireless environment a challenge. While it is important to guarantee a set of QoS parameters, it is equally crucial to adapt to the QoS parameters that can actually be delivered. This calls for sophisticated dynamic QoS management, which must be capable of handling the frequent loss and reappearance of a device on the network while keeping overhead to a minimum during periods of low connectivity. All this is in contrast to a wired network, where reasonably stable presence and consistently high network quality are taken for granted.
Designing a QoS solution for a wireless system
Any QoS solution needs to address a number of issues, the biggest being the inherently unpredictable nature of the wireless medium. Every scheduling algorithm must therefore be designed to adapt rapidly to changing network parameters. Other factors to consider when designing a QoS solution are support for multimedia applications, reliability, ease of use and cost. The most time-critical applications on the network today are multimedia applications, especially streaming data or high-quality video; multimedia is not tolerant of excessive delay and jitter. Reliability is another important consideration, again for real-time traffic like multimedia: frames that are delayed cannot be retransmitted and are dropped, so reliability directly affects the quality of the audio or video. For a home environment, ease of use is of paramount importance.
Need for QoS in WLAN
A WLAN is susceptible to all the factors mentioned above, which may cause variation in bandwidth, latency in data delivery, jitter and higher error rates. The adverse conditions prevailing in a WLAN make QoS guarantees very important. An intelligent, adaptive QoS solution could be the answer to the unpredictable nature of the medium, leading to better utilization of the network resources.
The base IEEE 802.11 specification does not provide for QoS in WLANs, but an extension draft (IEEE 802.11e) provides for MAC extensions to support QoS in WLANs.
Applications
Some of the applications of QoS in WLAN are:
- Video streaming, widely used in applications such as playing MPEG files from a remote server, web cameras, webcasts, video conferencing, etc.
- Audio streaming, widely used in applications such as playing MP3 or WAV files from a remote server, teleconferencing, etc.
- IEEE 1394 serial bus traffic can be carried over a wireless link with limited bandwidth. 1394 reserves 80% of its 100 Mbps bandwidth for isochronous data, which can carry a maximum of about six good-quality MPEG streams; over a WLAN, two or three such MPEG streams can be carried.
- QoS in WLAN eases the deployment of wireless connectivity in consumer electronics devices such as set-top boxes, DVD players, etc.
- Voice over IP can be carried over the WLAN as wireless VoIP (WVoIP).
- Wireless home networking is made easy, since multiple home users can connect to a single home AP without the cumbersome wiring and network infrastructure otherwise required.
QOS TECHNIQUES IN WLAN
Distributed Coordination Function (DCF3) with QoS enhancements (DiffServ)
DCF is the conventional transmission procedure followed by WLAN devices, and is based on CSMA/CA4. Although this method is wasteful of bandwidth, it is used because it avoids collisions, which are very expensive on the wireless medium.
This conventional DCF procedure has some limitations:
- There is no concept of priority for data. Since data from all applications, irrespective of its quality requirements, has the same priority, delay-bound data may be starved of bandwidth because every device is given equal access to the wireless medium. Even if the system has enough bandwidth to support such applications, it cannot guarantee that the real-time parameters of the data will be adhered to.
- Each device has to relinquish the wireless medium after occupying it for one complete frame, even if it has more data to deliver, so that every device on the medium gets equal access and priority. This effectively caps the maximum burst transfer from a device; to send data again, the device has to repeat its back-off procedure.
In a WLAN QoS environment, the conventional DCF has been enhanced to overcome these limitations. The enhanced mechanism is called Enhanced Distributed Channel Access (EDCA):
- Data from different applications is associated with different levels of priority based on its quality requirements; for example, voice is given a higher priority than video and data. Higher-quality data can thus be delivered efficiently, since higher classes of traffic are guaranteed a greater share of the bandwidth than lower classes. Contention happens at two levels: one internal to the device and another on the wireless medium. Internal contention is based on the priority of the data, and the contention handicaps of the data streams within the device are arranged so as to increase the chances of higher-priority streams winning the contention. (Typical per-category contention parameters are sketched after this list.)
- A device that wins contention for the wireless medium can hold the medium longer than before and can perform multiple data burst transfers. The medium can be held for as long as the TXOP5 lasts. A TXOP guarantees a certain amount of bandwidth for each priority of data and is used to cap the maximum burst transfer from a device, based on the priority of the data being transferred.
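The per-category contention parameters referred to above can be illustrated with a small sketch. The values below are the commonly quoted EDCA defaults for an 802.11a/g PHY (aCWmin = 15, aCWmax = 1023) and are given here only for illustration; a real device takes them from the EDCA parameter set advertised by the AP.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/*
 * Illustrative EDCA contention parameters per access category. Higher
 * priority categories wait a shorter AIFS and draw their backoff from a
 * smaller contention window, so they tend to win both the internal
 * contention and the contention on the air.
 */
struct edca_params {
    const char *name;
    int aifsn;       /* arbitration inter-frame space (slots after SIFS) */
    int cw_min;      /* minimum contention window                        */
    int cw_max;      /* maximum contention window                        */
};

static const struct edca_params edca[4] = {
    { "AC_BK (background) ", 7, 15, 1023 },
    { "AC_BE (best effort)", 3, 15, 1023 },
    { "AC_VI (video)      ", 2,  7,   15 },
    { "AC_VO (voice)      ", 2,  3,    7 },
};

/* Draw a random backoff in [0, cw] slots, as a DCF/EDCA station would. */
static int draw_backoff(int cw)
{
    return rand() % (cw + 1);
}

int main(void)
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < 4; i++)
        printf("%s AIFSN=%d CW=[%d,%d] backoff=%d slots\n",
               edca[i].name, edca[i].aifsn, edca[i].cw_min,
               edca[i].cw_max, draw_backoff(edca[i].cw_min));
    return 0;
}
```

Because voice traffic waits only two AIFS slots and backs off over a window of at most three slots, it statistically beats best-effort traffic, which waits at least three slots and backs off over a window of fifteen slots or more.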
Point Coordination Function (PCF6) with QoS enhancement (IntServ)
PCF is an optional conventional transmission procedure that can be followed by the WLAN devices, which is based on polling. This method leads to minimal wastage of the medium, since data is transferred with small waits between frames.
This conventional PCF has some limitations:
- Traffic cannot be turned into streams with different parameters. There is no way to apply parameters such as bandwidth, periodicity, etc., and there is no mechanism to communicate the traffic characteristics to the AP.
- While the air utilization is very efficient, the polling mechanism is not. Devices are polled using a round-robin algorithm, and are dropped from the polling list for that cycle if they do not have data. Devices cannot accumulate a bunch of frames and deliver them together. Thus, CBR or periodic traffic cannot be handled.
- When polled, each device can deliver only 1 frame (unfragmented), or sub-frame (if fragmented) to the AP. There is an overhead of 1 poll to retrieve 1 frame from a device.
In a WLAN QoS environment, the conventional PCF has been enhanced to overcome its limitations; the enhanced mechanism is called HCF Controlled Channel Access (HCCA), where HCF is the Hybrid Coordination Function.
- Traffic can be turned into “streams” or pipes, with each stream associated with a set of parameters. Once negotiated, it does not matter what kind of traffic is transmitted within these pipes. The specification provides a mechanism to communicate and negotiate these QoS parameters with the AP.
- The polling mechanism is not round-robin and is very efficient. The scheduling algorithms can be very robust, efficient and adaptive, since devices will accumulate frames until polled by the AP.
- When polled, each device can deliver multiple frames to the AP, depending on its requirements which it has negotiated with the AP at setup. The medium can be held as long as the TXOP lasts.
While EDCA provides prioritization of traffic at every device, on a congested channel it is not enough to maintain stringent QoS parameters. A device that has more data to deliver but keeps losing contention to other devices cannot guarantee the QoS expected by its applications. Another method is needed, one which bypasses the contention and backoff mechanism and gives a particular device exclusive access to the medium. A Controlled Access Period (CAP7) in HCCA is a period during which access to the wireless medium is controlled: the right to use the medium is handed to a device by the central coordinator, the AP. By using shorter wait periods to access the medium, the AP can take control of the medium and grant a TXOP to a device. This method guarantees data transfer to and from a particular device irrespective of congestion on the channel. It can be used to schedule data transfers at periodic intervals and to guarantee stream parameters such as periodicity, delay bound and jitter. Each device requests such guaranteed TXOPs from the AP, along with the parameters it needs for its quality of service; if the AP cannot guarantee those parameters, it rejects the request.
In parametric applications, bandwidth by itself cannot guarantee satisfactory performance. Real-time multimedia applications, which require timely, orderly and relatively error-free delivery of packets of digital video or audio in a continuous manner, often have real devices such as encoders and decoders at the source and destination. Such devices deliver and expect a defined amount of data at regular intervals. Maintaining satisfactory performance for this kind of application is a basic feature of HCCA.
HCCA uses centralized admission control, where the onus of bandwidth measurement lies with the AP. The AP alone has immediate access to the medium (it can take control of the medium whenever it needs to) and is responsible for providing every device access to the medium according to the parameters negotiated when the device joined the network. It is also responsible for denying new devices entry if the channel is congested, and for accepting or rejecting an existing device's request for more bandwidth.
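A simplified sketch of the kind of calculation such a centralized admission controller might perform is shown below. It derives a per-stream TXOP from negotiated, TSPEC-like parameters and admits the stream only if the aggregate controlled-access time still fits in each service interval. The service interval, per-frame overhead and medium-time budget are assumptions made for this sketch, not values from the specification.

```c
#include <math.h>
#include <stdio.h>

/* Negotiated traffic parameters for one stream (TSPEC-like, simplified). */
struct tspec {
    double mean_rate_bps;    /* negotiated mean data rate        */
    double msdu_bits;        /* nominal MSDU size in bits        */
    double phy_rate_bps;     /* PHY rate used for the stream     */
};

#define SERVICE_INTERVAL_S   0.050   /* 50 ms service interval (assumed)       */
#define PER_FRAME_OVERHEAD_S 0.0002  /* ACK, IFS, headers per frame (assumed)  */
#define MAX_CAP_FRACTION     0.60    /* share of the medium reserved for HCCA  */

/* TXOP time needed per service interval for one stream. */
static double txop_per_si(const struct tspec *t)
{
    double msdus = ceil(SERVICE_INTERVAL_S * t->mean_rate_bps / t->msdu_bits);
    return msdus * (t->msdu_bits / t->phy_rate_bps + PER_FRAME_OVERHEAD_S);
}

/* Admit the new stream only if the aggregate TXOP time still fits. */
static int admit(double already_scheduled_s, const struct tspec *new_stream)
{
    double total = already_scheduled_s + txop_per_si(new_stream);
    return total <= MAX_CAP_FRACTION * SERVICE_INTERVAL_S;
}

int main(void)
{
    struct tspec voice = { 96000.0, 1280.0, 24000000.0 };    /* ~96 kbps voice */
    struct tspec video = { 4000000.0, 12000.0, 24000000.0 }; /* ~4 Mbps video  */

    double scheduled = txop_per_si(&voice);
    printf("voice TXOP per service interval: %.4f s\n", scheduled);
    printf("admit 4 Mbps video stream? %s\n",
           admit(scheduled, &video) ? "yes" : "no");
    return 0;
}
```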
CAP in the Contention Period (CP8)
A CAP in the CP is a better solution than EDCA alone, in that it can be used to control access to the medium and to guarantee many QoS parameters that EDCA cannot. While this method provides great benefits compared to EDCA, it does not provide fine control of the medium. Any legacy device using DCF, or an 802.11e device using EDCA, can interfere with the scheduling at the AP, and its data transfers may delay the scheduled data transfer (the CAP) of a particular device. Moreover, for better access to the medium, CAPs may need to be protected by an initial transfer of a frame (RTS9) designed to prevent other devices from contending for the medium, which adds a marginal overhead.
An advantage of using the CP to perform a CAP is that the medium reverts to EDCA when the AP has no device to poll for uplink data and no downlink data to deliver, so there are no gaps on the medium.
CAP in the Contention Free Period (CFP10)
The CFP is probably the best mechanism for using the channel efficiently while allowing fine control of the medium by the AP. It is a duration of time on the medium during which the AP is in sole control. No device (legacy or QoS-aware) can contend for the medium and interfere with the AP’s scheduling, so the best QoS parameters can be guaranteed to devices during this period. CAPs in the CFP are controlled by the AP, which hands over the right to use the medium for a TXOP duration to a device that has requested it. Since there are no contending devices during the CFP, the AP can exercise very fine control over delay and jitter for time-critical traffic. There is little or no overhead: devices do not waste valuable bandwidth backing off, CAP protection using the RTS frame is not needed, and multiple CAPs can be scheduled back to back by the AP, separated by the smallest timing intervals.
The disadvantage of placing CAPs in the CFP follows from the nature of the CFP itself: no device can contend for the medium during this period. Hence, if the AP does not want to poll any device for a duration of time and has no data of its own to deliver, the medium sits idle and is wasted until uplink or downlink traffic can resume.
Sidelinks – bypassing the AP
802.11 needs a central coordinator for all transactions. This central coordinator acts as the gateway between the wireless devices and the external wired world, and also acts as a bridge between two wireless devices on the network. Since wireless devices are not required to have any knowledge about remote devices (whether a device is wired or wireless, whether it is accessible, whether it is in power save, etc.), direct communication between two wireless stations is not defined in conventional WLANs. When two wireless devices associated with the same AP exchange data, twice the bandwidth is consumed, because direct communication is not allowed: data goes from the first device to the AP, then from the AP to the second device. Sidelink techniques can eliminate this unnecessary waste of the wireless medium. Sidelinks in 802.11e are called Direct Link Setup (DLS).
In this method, a device that needs to deliver data to another has to first identify and discover the properties of the other device through the AP. If it finds that the destination device is a wireless device in its vicinity, it will try and establish a direct connection with the device and bypass the AP for data transfer. Such a mechanism needs complete control of the medium (so that it does not unknowingly interfere with another data transfer happening on the shared medium), and is best attempted during a CAP. When the AP hands over control to a particular device for TXOP duration, it can perform such a technique and can save valuable medium bandwidth.
This method is not without its risks. It may lead to the hidden-node11 problem, and distance plays a crucial role in wireless communication. It may also happen that the device to which a direct connection is being established is so far away that link quality is degraded, or the link is poor enough that it is better to transfer the data to the AP and have it forward the data to the remote device. Before delivering the actual data to the destination station over a sidelink, a device may choose to validate the channel quality by transmitting some randomly generated data of different lengths. Depending on the acknowledgements received from the remote device, the station can adjust the frame size and transmit power and deliver the actual data with a greater probability of success.
Further, this mechanism can only be used when both devices are in the wireless network.
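As a purely hypothetical illustration of the probe-and-decide step described above, the sketch below chooses between the direct link and the AP path based on how many probe frames were acknowledged. The probe count and success threshold are assumptions for this sketch, not values from the specification.

```c
#include <stdio.h>

/*
 * Hypothetical decision helper for a direct link (sidelink). The station
 * sends a handful of probe frames directly to the peer and falls back to
 * relaying through the AP if too few of them are acknowledged.
 */
#define PROBE_COUNT        10
#define MIN_SUCCESS_RATIO  0.8

enum path { VIA_AP, DIRECT_LINK };

static enum path choose_path(int probes_sent, int probes_acked)
{
    double ratio = probes_sent ? (double)probes_acked / probes_sent : 0.0;
    return (ratio >= MIN_SUCCESS_RATIO) ? DIRECT_LINK : VIA_AP;
}

int main(void)
{
    int acked = 7;   /* pretend 7 of the 10 probes came back acknowledged */
    enum path p = choose_path(PROBE_COUNT, acked);

    printf("%d/%d probes acknowledged -> %s\n", acked, PROBE_COUNT,
           p == DIRECT_LINK ? "use the direct link" : "relay through the AP");
    return 0;
}
```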
Consolidated/No acknowledgement
Since wireless networks are more prone to interference, acknowledgement of each transmitted packet is essential for data integrity. However, in a well-designed and well-deployed WLAN, data integrity is largely assured and the probability of failure is small enough to be treated as an exception. The mandatory ACK12 frame for each packet plays a significant role in decreasing the available bandwidth. If packet failures are few and far between, retries of such packets can be left to higher layers without any loss in functionality.
Further, time-critical applications and isochronous traffic13, which require a quality of service based on human perceptual considerations for voice, video, audio and interactivity, have no need for acknowledgement: corrected data that arrives late is simply of no use. Such streaming applications can be exempted from the normal ACK policy.
In asynchronous applications like FTP, telnet, etc., data integrity is essential and acknowledgement is necessary. However, instead of transmitting an ACK for each packet on reception, an ACK can be transmitted for a set of received packets, identified by the sequence numbers of the data. This consolidated ACK transmission saves a significant amount of bandwidth; the transmitter may then retry the specific data frames that are not acknowledged in the consolidated ACK. The consolidated ACK scheme in 802.11e is called Block Acknowledgement.
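A minimal sketch of the bookkeeping behind such a consolidated acknowledgement is shown below. It assumes a 64-frame window tracked in a plain 64-bit bitmap; the actual 802.11e Block ACK frame format differs, so treat this purely as an illustration of the idea.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative consolidated (block) acknowledgement bookkeeping. The
 * receiver records each correctly received frame in a bitmap relative to
 * a starting sequence number; the transmitter later walks the bitmap
 * returned in the block ACK and retries only frames whose bit is clear.
 */
struct block_ack {
    uint16_t start_seq;   /* sequence number of the first frame in the window */
    uint64_t bitmap;      /* bit n set => frame (start_seq + n) was received   */
};

/* Receiver side: mark a frame as received. */
static void ba_mark_received(struct block_ack *ba, uint16_t seq)
{
    uint16_t offset = (uint16_t)(seq - ba->start_seq) & 0x0FFF; /* 12-bit seq space */
    if (offset < 64)
        ba->bitmap |= (uint64_t)1 << offset;
}

/* Transmitter side: list the frames that still need to be retried. */
static void ba_print_missing(const struct block_ack *ba, int frames_sent)
{
    for (int i = 0; i < frames_sent && i < 64; i++)
        if (!(ba->bitmap & ((uint64_t)1 << i)))
            printf("retry frame with sequence number %u\n",
                   (unsigned)((ba->start_seq + i) & 0x0FFF));
}

int main(void)
{
    struct block_ack ba = { .start_seq = 100, .bitmap = 0 };

    /* Frames 100..107 were sent; 103 and 106 were lost on the air. */
    uint16_t received[] = { 100, 101, 102, 104, 105, 107 };
    for (unsigned i = 0; i < sizeof(received) / sizeof(received[0]); i++)
        ba_mark_received(&ba, received[i]);

    ba_print_missing(&ba, 8);
    return 0;
}
```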
CONCLUSION
Latest Trends
The IEEE 802.11e specification has yet to be accepted as a complete specification; it is still in draft form. The WiFi Alliance, the marketing and certification body for Wireless LAN systems, has come up with its own standards based closely on the IEEE 802.11e EDCA and HCCA mechanisms. The WiFi version of the 802.11e EDCA protocol is called Wireless Multi-Media (WMM), and the WiFi version of the 802.11e HCCA protocol is called Wireless Multi-Media Scheduled Access (WMM-SA). The long-term goal of these specifications is the full 802.11e standard.
These specifications will go the way of WPA and the upcoming WPA2. Products that support IEEE 802.11i TKIP (a security protocol) have been WPA certified by the WiFi Alliance and sport the logo prominently. Since the IEEE does not certify devices, the WiFi Alliance plays a valuable role in assuring interoperability by requiring manufacturers to get their products WiFi certified; no other body performs such certification for Wireless LAN products.
At the time of this writing, WiFi is offering WiFi certification for WMM, and will soon begin certifying devices for WMM-SA.
From a technical point of view, WMM is 802.11e EDCA without optional features such as Admission Control, and WMM-SA is HCCA without optional features such as Sidelinks and Consolidated Acknowledgement. A subset of features from the 802.11e specification has been selected to allow product manufacturers to design, manufacture, certify and deploy products that will eventually lead the way to the complete 802.11e standard.
Limitations of the specifications
There are some open issues in the specification.
The first is roaming, which could be mobility at Layer 2 between two APs on the same subnet, or at Layer 3 when the device crosses subnets. In both scenarios, the handoff technique for seamless roaming has to be perfected in order to maintain existing QoS guarantees.
It is well known that EDCA is not the mechanism for handling periodic traffic. HCCA was designed to handle periodic traffic; however, the HCCA mechanisms are best suited to CBR traffic characteristics. The HCCA scheduling scheme does not lend itself to supporting VBR traffic, nor is it an adaptive mechanism that allows the negotiated parameters of a stream to change with changing channel conditions.
APPENDIX
Wipro’s solution
Wipro Technologies is a forerunner in Wireless LAN Intellectual Property solutions. Our 802.11 solution has been WiFi certified for 802.11a-WPA, and we have passed pre-certification for WMM. We have licensed our solutions to multiple vendors and were among the first to demonstrate a working QoS solution.
We have a complete solution around the 802.11e specification, as well as support for the WiFi versions, i.e. WMM and WMM-SA. The solution leverages the legacy MAC architecture and builds upon Wipro’s 802.11 DCF and PCF solutions to offer highly optimized EDCA and HCCA implementations. It is designed to use a single processor, such as an ARM9, with the firmware stack running on the processor. The hardware stack is synthesizable Verilog and contains all the time-critical operations. The protocol features are partitioned to reduce both hardware gate count and software MIPS requirements.
While time to market is an important factor, Wipro takes pride in producing complete solutions with support for the optional features of the specification. We support Admission Control in EDCA (an optional feature of EDCA, and not supported by WMM). We support the use of the Contention Free Period (CFP) for HCCA, a feature we believe is very powerful, as discussed in the section “CAP in the Contention Free Period (CFP)”. We also support DLS (sidelinks) and Block ACKs (consolidated ACKs). The Block ACK scheme that we support is an efficient hardware mechanism for responding immediately to the remote device with the status of the received frames.
The partitioning and the use of hardware accelerators have led to an efficient solution, optimized for the WLAN protocols and high on performance and extensibility. We intend to extend the solution to support an upcoming high-speed WLAN protocol, IEEE 802.11n, which promises to deliver 100 Mbps above the MAC. 802.11n assumes that highly efficient QoS mechanisms exist in order to deliver peak performance, and the Wipro solution is well suited for use in 802.11n systems.
ACRONYMS
QoS : Quality of Service
WLAN : Wireless Local Area Network
DCF : Distributed Coordination Function
EDCA : Enhanced Distributed Channel Access
PCF : Point Coordination Function
CAP : Controlled Access Period
CP : Contention Period
RTS : Request To Send
CFP : Contention Free Period
REFERENCES
IEEE 802.11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications. IEEE.
IEEE 802.11e: Wireless LAN medium access control (MAC) enhancements for Quality of Service. IEEE.
ABOUT THE AUTHOR
Amit Bansal is a HW Architect with the Semiconductor IP group of Wipro. His domain of expertise is wireless protocols. He has designed system solutions around the 802.11 suite of protocols for the last few years.
ABOUT WIPRO TECHNOLOGIES
Wipro is the first PCMM Level 5 and SEI CMMi Level 5 certified IT Services Company globally. Wipro provides comprehensive IT solutions and services (including systems integration, IS outsourcing, package implementation, software application development and maintenance) and Research & Development services (hardware and software design, development and implementation) to corporations globally.
Wipro's unique value proposition is further delivered through our pioneering Offshore Outsourcing Model and stringent Quality Processes of SEI and Six Sigma.
WIPRO IN SEMICONDUCTOR IPs
Wipro is a leader in the semiconductor IP space, with IP solutions for both wireless as well as wired connectivity. Wipro offers IPs in the areas of UWB, 802.11 (a, b & g), IEEE 1394 (a&b), DTVs, USB and Ethernet. The suite of IPs is well complemented by a diverse set of embedded and product engineering service teams worldwide.
For further information visit us at: http://www.wipro.com/semi-ip
For more whitepapers logon to: http://www.wipro.com/insights
© Copyright 2005. Wipro Technologies. All rights reserved. No part of this document may be reproduced, stored in a retrieval system, transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without express written permission from Wipro Technologies. Specifications subject to change without notice. All other trademarks mentioned herein are the property of their respective owners.
1 Quality of Service: It is the capability of a network to provide better service to selected network traffic. The goal of QoS is to provide guarantees on the ability of a network to deliver predictable results.
2 Wireless Local Area Network: A wireless LAN is one in which a mobile user can connect to a local area network through a wireless (radio) connection over short distances instead of using traditional network cabling.
3 Distributed Coordination Function: This is a channel access function that is based on unweighted and equal contention by all devices for the medium.
4 Carrier Sense Multiple Access with Collision Avoidance: A network control protocol in which the channel is sensed after random intervals; the device can transmit if the medium is found to be free.
5 Transmit Opportunity: An interval of time when a particular device has the right to initiate transmissions onto the wireless medium.
6 Point Coordination function: This is a channel access function which is based on a master-slave configuration of the AP with the stations associated with it
7 Controlled Access Period: A duration of time on the medium reserved for a particular device. In this duration, the device has sole rights to transmit, or receive.
8 Contention Period: This is a duration of time on the network when all devices equally contend for the medium.
9 Request To Send: A frame that is used to reserve the medium for a period of time during which devices must not contend for the medium.
10 Contention Free Period: This is a duration of time on the network when no device should contend for the medium. The right to initiate transmission on the medium is given to a device by the coordinator.
11 Hidden-node: This problem occurs when a wireless node cannot hear one or more of the other nodes; therefore the media access protocol cannot function properly.
12 Acknowledgement: A frame that is used to acknowledge the last transmission from a device.
13 Isochronous traffic: Traffic that is delay sensitive, like voice and real-time video.