Author: Neftaly Malatjie

  • 114061 LG 1.47 Packet Switching:

    Always-on connection, where available bandwidth is shared between several users. No time-based charging. Charging is based on committed traffic rate. This type of connection is more appropriate for bursty data transfers. Special configuration is needed to support strict QoS requirements. Frame Relay is a packet switching connection type.

    High Level Data Link Control (HDLC)

    HDLC is a data-link layer protocol. Because there is no standard way of identifying the type of network protocol carried within the HDLC encapsulation, each vendor uses its own proprietary HDLC variant.

    Cisco uses its own HDLC implementation; therefore Cisco routers are not able to communicate with equipment running other vendors’ HDLC implementations. Nevertheless, HDLC is the default encapsulation used by Cisco routers on synchronous serial links (leased line connections). When communicating with a non-Cisco device, synchronous Point-to-Point Protocol (PPP) is the more suitable option to use.

    On Cisco routers, use the show interface command on serial interfaces to see the configured encapsulation method.

    To see the physical connection type used, issue the show controllers command.
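    As an illustration, the output of these two commands looks roughly like the trimmed excerpt below. The router hostname, interface number and hardware details are placeholders; the exact output varies by platform.

    ```
    Router# show interface serial 0/0
    Serial0/0 is up, line protocol is up
      Encapsulation HDLC, loopback not set
      ...
    Router# show controllers serial 0/0
      ...
      DTE V.35 TX and RX clocks detected
      ...
    ```

    The Encapsulation line of show interface confirms the data-link protocol in use, while show controllers reveals the cable/connection type (here a V.35 DTE cable).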

    Point-to-Point Protocol (PPP)

    The PPP data link protocol is used on serial connections between dissimilar routers, for example a Cisco router and a non-Cisco router. PPP is designed to allow the simultaneous use of multiple network layer protocols and also supports two hostname authentication protocols: CHAP (Challenge Handshake Authentication Protocol) and PAP (Password Authentication Protocol).

    PPP uses the services of the HDLC protocol for encapsulating datagrams over serial links. Moreover, it uses two additional control protocols to support its operation:

    • Link Control Protocol (LCP) provides the means for configuring, establishing, maintaining and terminating the PPP connection. Among other things, LCP handles PPP authentication methods, error detection, compression techniques and support for multilink.

    • Network Control Protocol (NCP) provides the means for encapsulating multiple network layer protocols across the PPP data link.

    Use the show interface command to verify PPP operation.

    Notice in the output of the show interface serial 1/0 command the PPP encapsulation type. Also notice that LCP is Open, meaning that LCP has established and is maintaining the PPP connection. Finally, the last line is associated with the NCP: it shows that IP, CDP and AppleTalk are open.
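    A minimal configuration sketch for enabling PPP with CHAP authentication on a serial link is shown below. The hostnames, username and password are placeholders; on a real pair of routers each side must have a username entry matching the peer's hostname, with the same password on both ends.

    ```
    ! On router R1 (R2 mirrors this with "username R1 ...")
    hostname R1
    username R2 password cisco123
    !
    interface serial 1/0
     encapsulation ppp
     ppp authentication chap
    ```

    With this in place, show interface serial 1/0 should report Encapsulation PPP and LCP Open once the CHAP exchange succeeds.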

    Frame Relay

    Frame Relay is a packet-switched technology. No connection setup phase takes place prior to data transmission. Moreover, the network infrastructure is shared among different users in contrast to leased line connections where the whole amount of bandwidth is always dedicated to the corresponding user. The main characteristics of Frame Relay technology are presented below:

    • Contract terms are signed between the customer and the service provider. The contract mainly consists of a so-called Committed Information Rate (CIR), which is the amount of bandwidth the service provider has contractually guaranteed to provide to the customer at all times. The latter may use more bandwidth if the network infrastructure is not congested; however, this excess traffic is not guaranteed at all.
    • Significant cost savings for both customer and service provider. The customer makes use of this packet-switched technology at a much lower price compared to the leased line option. On the other hand, the service provider does not have to install and maintain a huge number of leased line connections, which always consume their full bandwidth even when they are not actually used.
    • Frame Relay on Cisco routers is configured on serial interfaces. Unlike HDLC or PPP, configuring Frame Relay is achieved by specifying the appropriate encapsulation type, choosing between Cisco and IETF (Internet Engineering Task Force). The default encapsulation used for Frame Relay on Cisco routers is, as you might guess, Cisco.
    • Frame Relay uses what are called virtual circuits to route data across the service provider's infrastructure towards the other communicating end. Service providers mainly use Permanent Virtual Circuits (PVCs) within their networks to route packets back and forth. Once created, PVCs remain in operation for as long as the customer pays the bill.
    • PVCs are identified by Data Link Connection Identifiers (DLCIs), which are typically assigned by the provider to end devices. These identifiers have only local significance, in the sense that they identify a specific data link and not the entire virtual circuit end-to-end. Based on the DLCI values assigned to customers, the service provider is able to route packets appropriately.
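    The points above translate into a short interface configuration. The sketch below is illustrative: the IP address and the DLCI value 102 are placeholders, and in practice the DLCI is the one assigned by the provider.

    ```
    interface serial 0/0
     encapsulation frame-relay ietf   ! omit "ietf" to use the Cisco default
     ip address 10.1.1.1 255.255.255.0
     frame-relay interface-dlci 102   ! DLCI assigned by the provider
    ```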

    Integrated Services Digital Network (ISDN)

    ISDN is a Circuit Switched technology that is designed to run over existing telephone networks. It is a fully digital technology end-to-end. It consists of a number of protocols for transferring data, voice and video over the traditional telephone system. ISDN has the following major characteristics:

    • Faster data transmission compared with analog modem connection.
    • Perfect candidate for establishing a backup connection to a leased line connection.
    • Comes with two flavors:
      • ISDN Basic Rate Interface (BRI), also known as 2B+D, consists of two data channels (B channels) that operate at 64 Kbps each and a single signalling channel (D channel) that operates at 16 Kbps.
      • ISDN Primary Rate Interface (PRI), also known as 23B+D in North America and Japan and 30B+D in Europe. In the case of 23B+D, it consists of 23 data channels operating at 64 Kbps each and one signalling channel operating at 64 Kbps as well.
    • To connect a Cisco router to the ISDN network you can either use a router with a built-in NT1 (U) interface (the termination of ISDN's two-wire connection that runs into the home or office) or use an ISDN terminal adapter (TA) along with your router's serial interface.
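    The channel structures above make the aggregate rates easy to work out. The following snippet computes them from the per-channel figures given in the text:

    ```python
    # Aggregate ISDN interface rates, built from the channel figures above.
    B_CHANNEL_KBPS = 64  # each bearer (B) channel carries 64 Kbps

    def isdn_rate(b_channels: int, d_channel_kbps: int) -> int:
        """Total rate in Kbps for an nB+D interface."""
        return b_channels * B_CHANNEL_KBPS + d_channel_kbps

    bri    = isdn_rate(2, 16)   # BRI, 2B+D
    pri_na = isdn_rate(23, 64)  # PRI, 23B+D (North America and Japan)
    pri_eu = isdn_rate(30, 64)  # PRI, 30B+D (Europe)
    print(bri, pri_na, pri_eu)  # 144 1536 1984
    ```

    The 23B+D total of 1536 Kbps matches the payload of a T1 line, and 30B+D's 1984 Kbps matches the usable channels of an E1 line, which is why PRI differs between regions.
    
    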


  • 114061 LG 1.46 Circuit Switching:

    The concept of this WAN connection is based on the typical telephone switching network. A connection needs to be established before data can be transferred. This type of connection is used for low-bandwidth data transfers where charging is calculated based on actual connection time. ISDN (Integrated Services Digital Network) is the protocol basically used on this connection type.

  • 114061 LG 1.45 PRINCIPLES OF NETWORK INTERCONNECTIONS

    A number of different WAN connection types exist today. Choosing the right WAN connection type is up to you, but the information in this article will make your decision process much easier.

    WAN Connection Types

    Leased Line:

    This is considered to be a dedicated point-to-point connection type where a permanent communication path exists between a Customer Premise Equipment (CPE) on one site and a CPE at the remote site, communicating through Data Communications Equipment (DCE) within the provider's site. Synchronous serial lines are used for this connection, and the most frequent protocols observed on these lines are HDLC (High-Level Data Link Control) and PPP (Point-to-Point Protocol). When cost is not an issue, you should use this type of connection.

  • 114061 LG 1.44 Congestion in Connectionless Packet-switched Networks

    A network is congested when one or more network components must discard packets due to lack of buffer space. Given the above architecture, it is possible to see how network congestion can occur. A source of data flow on the network cannot reserve bandwidth across the network to its data’s destination. It, therefore, is unable to determine what rate of data flow can be sustained between it and the destination.

    If a source transmits data at a rate too high to be sustained between it and the destination, one or more routers will begin to queue the packets in their buffers. If the queueing continues, the buffers will become full and packets from the source will be discarded, causing data loss. If the source is attempting to guarantee transmission reliability, retransmission of data and increased transmission time between the source and the destination is the result. Figure 2 from [Jain & Ramakrishnan 88] demonstrates the problem of congestion.

    As the load (rate of data transmitted) through the network increases, the throughput (rate of data reaching the destination) increases linearly. However, as the load reaches the network’s capacity, the buffers in the routers begin to fill. This increases the response time (time for data to traverse the network between source and destination) and lowers the throughput.

    Once the routers’ buffers begin to overflow, packet loss occurs. Increases in load beyond this point increase the probability of packet loss. Under extreme load, response time approaches infinity and the throughput approaches zero; this is the point of congestion collapse. This point is known as the cliff due to the extreme drop in throughput. Figure 2 also shows a plot of power, defined as the ratio of throughput to response time. The power peaks at the knee of the figure.
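    The knee and cliff can be made concrete with a toy calculation of power as throughput divided by response time. The load, throughput and response-time figures below are invented purely to reproduce the shape of the curve described above; they are not measured data.

    ```python
    # Toy illustration of "power" = throughput / response time.
    # All numbers are invented to show the curve's shape, not measurements.
    loads       = [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]   # offered load (fraction of capacity)
    throughputs = [0.1, 0.3, 0.5, 0.65, 0.7, 0.4]  # delivered throughput levels off, then drops
    responses   = [1.0, 1.1, 1.3, 2.0, 5.0, 20.0]  # response time blows up near capacity

    powers = [t / r for t, r in zip(throughputs, responses)]
    knee = loads[powers.index(max(powers))]
    print(f"power peaks (the 'knee') at load {knee}")
    ```

    Past the knee, response time grows faster than throughput, so power falls even while throughput is still rising; at the cliff, throughput itself collapses.
    
    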

    1. Transmission overhead and message size

    Figure 3: The transmission overhead against packet size for message length 1000.

    We define the transmission overhead to be the number of bits, communicated for a message, that do not represent the data bits of the message. In particular, packet headers and acknowledgement packets determine the transmission overhead. Most forms of serial communication use encoding, e.g. adding stop bits, to ensure that the receiving side correctly interprets the serial bit stream. DS links use an encoding scheme that extends each byte (8 bits) to a 10-bit token. The transmission overhead is therefore at least 20%.

    Figure 2 shows the format of data and acknowledgement packets that we use for the DSNIC protocol. It shows that, apart from the DS link routing header, three characters are used for protocol specific information, independent of the payload size. Together with the acknowledgement scheme, this allows us to calculate the transmission overhead for different packet sizes. Figure 3 shows the transmission overhead against packet size, or, more precisely, against payload size, for sending a 1000 byte message. The packet size strongly influences the required number of packets and packet headers, and thereby the transmission overhead.
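    The calculation can be sketched as follows. The 3-byte per-packet protocol header and the 10-bit token encoding come from the text; the exact size of an acknowledgement packet is not stated, so the 1-byte acknowledgement used here is an assumption for illustration only.

    ```python
    import math

    def overhead_fraction(message_bytes: int, payload: int,
                          header_bytes: int = 3, ack_bytes: int = 1) -> float:
        """Fraction of transmitted bits that are not message data.

        Assumptions: each packet carries header_bytes of protocol
        information plus the payload, each packet is acknowledged with an
        ack_bytes packet (assumed size), and DS-link encoding expands
        every byte into a 10-bit token.
        """
        packets = math.ceil(message_bytes / payload)
        total_bytes = packets * (header_bytes + ack_bytes) + message_bytes
        total_bits = total_bytes * 10   # 10-bit tokens on the wire
        data_bits = message_bytes * 8   # the actual message data
        return (total_bits - data_bits) / total_bits

    # Small payloads pay for many headers; the overhead never drops below
    # the 20% imposed by the byte-to-token encoding alone.
    print(round(overhead_fraction(1000, 32), 3))   # 0.291
    print(round(overhead_fraction(1000, 256), 3))  # 0.213
    ```

    Sweeping the payload size in this function reproduces the qualitative shape of Figure 3: overhead falls steeply at first as fewer headers are needed, then flattens towards the 20% encoding floor.
    
    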

    Figure 4: The maximum network throughput versus packet size.

    Figure 4 shows the influence of the packet size on the maximum network throughput for a 512 end-node Clos network under random traffic. This graph shows an optimal network throughput for packet size 28. For packets smaller than 28, the network throughput drops due to the domination of the transmission overhead. For packets larger than the optimum, the network throughput becomes worse due to network congestion.

    The header of each packet needs to be processed by the DSNIC. This processing requires time. Using a small packet size, such as the optimal 28, requires a lot of processing to achieve full bandwidth communication. To keep this processing from becoming the system’s bottleneck, we choose not to fix the packet size, but to make it adaptable so that its influence on the performance of the DSNIC can be investigated. We only support powers of two for the packet size to accommodate the implementation.

    Other factors that affect the response times on a LAN include:

    • Speed of devices
    • Processing time
    • Priority of nodes
    • Quality of transmission.


  • 114061 LG 1.43 FACTORS AFFECTING WAN RESPONSE TIMES

    The following are factors that affect WAN response times:

    1. Bandwidth of transmission line

    A network has a finite amount of bandwidth, and if there are too many devices or applications connected to the network, they compete for the available bandwidth, resulting in slow data speeds and overall poor performance. Many routers offer dual-band or tri-band technology, which frees up bandwidth by offering multiple channels for spreading the workload. The more people in an office utilizing a LAN network at the same time, the more bandwidth is needed. 

    2. Queuing at nodes or hosts

    Network congestion, in data networking and queuing theory, is the reduced quality of service that occurs when a network node is carrying more data than it can handle. Typical effects include queuing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.

    Network protocols that use aggressive retransmissions to compensate for packet loss due to congestion can increase congestion, even after the initial load has been reduced to a level that would not normally have induced network congestion. Such networks exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.

    All the links on a network are joined together by routers. These forward packets arriving on incoming links to the appropriate outgoing links, to ensure that the packet is routed to its intended destination. Figure 1 shows the basic architecture of a router.

    The router is connected to a number of incoming links and a number of outgoing links. The two numbers may be different, although the number of incoming links usually equals the number of outgoing links. In most situations, input and output links are paired to form either a full-duplex channel, where data can flow in both directions simultaneously, or a half-duplex channel, where data can flow in only one direction at a time.

    Incoming packets are buffered in the input buffers. Once in the buffer, they are selected by the Packet Selection Function to be passed to the Routing Function. The Routing Function determines on to which outgoing link a packet must be placed for it to get closer to its destination: this is done with a routing table which, as described earlier, is semi-static. When the correct link for the packet is found, the Routing Function passes the packet to the Packet Dropping Function for queuing on the output buffer for the link. When the packet reaches the other end of the queue, it is transmitted over the link to the next router, or to the final destination.

    The Packet Selection Function can choose any of the packets queued in the input buffers for routing by the Routing Function. Normally this would be done in a First-In First-Out method, but other selection methods may be useful in a congested environment.

    There are two fundamental bottlenecks in the given router architecture. Firstly, there is a minimum time needed for the router to decode the network header of the incoming packet, determine the route for the packet, and pass the packet to an outgoing link for transmission. There is also a delay for the packet to be transmitted on the outgoing link: this may just be the packet's transmission time for a full-duplex link, or there may be an extra delay for the link to become clear on a half-duplex link. These delays form one bottleneck.

    The second bottleneck indicates that the router must be prepared to buffer output packets, to prevent them from being lost while the router waits for an outgoing link to become clear. The two bottlenecks together indicate that the router must buffer incoming packets, to prevent them from being lost if they arrive too quickly for the router to process.

    By definition, the router’s input and output buffers are finite. If a buffer becomes full, then no more packets can be queued in the buffer, and the router has no choice but to discard them. This causes data loss in the data flow between a source and destination, and usually causes the source to retransmit the data.

    Although a router cannot queue a packet if the corresponding output buffer is full, it can choose either to discard the unqueued packet, or to discard a packet already in the output queue, and then queue the unqueued packet. This choice is performed by the Packet Dropping Function. This cannot be done for the input buffers, as the packet has not been placed in the router’s internal memory until it has been queued in the input buffers. Thus, packets may be lost on input, and the router has no control over which packets are lost.
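    The finite output buffer and the simplest Packet Dropping Function (drop the newly arriving packet when the queue is full, often called drop-tail) can be sketched as a toy model. The class below is an illustration of the behaviour described above, not a model of any particular router.

    ```python
    from collections import deque

    class OutputBuffer:
        """Toy finite output queue with drop-tail behaviour: when the
        buffer is full, a newly arriving packet is simply discarded."""

        def __init__(self, capacity: int):
            self.capacity = capacity
            self.queue = deque()
            self.dropped = 0

        def enqueue(self, packet) -> bool:
            if len(self.queue) >= self.capacity:
                self.dropped += 1  # Packet Dropping Function: drop the newcomer
                return False
            self.queue.append(packet)
            return True

        def transmit(self):
            """Send the packet at the head of the queue, if any."""
            return self.queue.popleft() if self.queue else None

    buf = OutputBuffer(capacity=3)
    for pkt in range(5):                # 5 arrivals, room for only 3
        buf.enqueue(pkt)
    print(len(buf.queue), buf.dropped)  # 3 2
    ```

    A smarter Packet Dropping Function could instead evict a packet already in the queue (for example the oldest, or a low-priority one) to make room for the newcomer; the point of the text above is that this choice exists for output buffers but not for input buffers.
    
    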

    Finally, the router’s buffers may share a common memory space, or they may have individual memory spaces. With the former, no buffer can become full until the router’s entire memory space is allocated to buffers, at which point all buffers become full. With the latter, any buffer’s utilisation has no influence on any other buffer’s utilisation.