
Saturday 28 September 2024

Channelization Protocol

Channelization is a multiple-access method in which the total usable bandwidth of a shared channel is divided among stations on the basis of time, frequency, or code, so that all stations can access the channel at the same time to send their data frames. The following are the channelization methods:

FDMA (Frequency Division Multiple Access)
TDMA (Time Division Multiple Access)
CDMA (Code Division Multiple Access)

Frequency Division Multiple Access (FDMA): FDMA is a type of channelization protocol in which the available bandwidth is divided into frequency bands. Each station is allocated one band to send its data, and that band is reserved for that particular station at all times.
The frequency bands of different stations are separated by small bands of unused spectrum called guard bands, which prevent interference between stations. FDMA is an access method in the data link layer: the data link layer at each station tells its physical layer to make a bandpass signal from the data passed to it. The signal is created in the allocated band, and there is no physical multiplexer at the physical layer.
Time Division Multiple Access (TDMA): TDMA is a channelization protocol in which the channel is shared among the stations in time. Each station is given a time slot and can transmit data only during that slot.
Each station must be aware of the beginning and the location of its time slot, so TDMA requires synchronization between the different stations. It is an access method in the data link layer: at each station, the data link layer tells the physical layer to use the allocated time slot.

Code Division Multiple Access (CDMA): In CDMA, all stations can transmit data simultaneously. Each station transmits over the entire bandwidth all the time, and the multiple simultaneous transmissions are separated by unique code sequences; each station is assigned its own code sequence.
For example, consider four stations marked 1, 2, 3 and 4. Their data are d1, d2, d3 and d4, and the codes assigned to them are c1, c2, c3 and c4 respectively.
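As a rough illustration (not taken from the notes above), the following Python sketch shows how orthogonal chip sequences let several stations transmit at the same time and still be separated at the receiver. The 4-chip Walsh codes and the data bits are assumed example values.

# A minimal sketch of CDMA separation using orthogonal chip sequences (Walsh codes).

# Orthogonal 4-chip Walsh codes c1..c4 for stations 1..4
c1 = [+1, +1, +1, +1]
c2 = [+1, -1, +1, -1]
c3 = [+1, +1, -1, -1]
c4 = [+1, -1, -1, +1]

# One data bit per station, encoded as +1 (bit 1) or -1 (bit 0); 0 means the station is silent
d1, d2, d3, d4 = +1, -1, +1, 0   # station 4 is idle in this assumed example

# Each station multiplies its bit by its code; the shared channel carries the sum
channel = [d1*a + d2*b + d3*c + d4*d for a, b, c, d in zip(c1, c2, c3, c4)]

def decode(code, signal):
    """Recover a station's bit: inner product with its code, divided by the code length."""
    return sum(x * y for x, y in zip(code, signal)) / len(code)

print(decode(c1, channel))  # +1.0 -> station 1 sent bit 1
print(decode(c2, channel))  # -1.0 -> station 2 sent bit 0
print(decode(c4, channel))  #  0.0 -> station 4 sent nothing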

Controlled Access Protocol

Controlled access is a method of reducing data frame collisions on a shared channel. In the controlled access method, the stations consult one another to decide which station may send; a single station cannot send a data frame unless it has been authorized by the other stations. There are three types of controlled access:

Reservation
Polling
Token Passing.

1. Reservation
In the reservation method, a station needs to make a reservation before sending data.

The timeline has two kinds of periods:
Reservation interval of fixed time length
Data transmission period of variable frames.

If there are M stations, the reservation interval is divided into M slots, and each station has one slot.

Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other station is allowed to transmit during this slot.

In general, the i-th station announces that it has a frame to send by inserting a 1 bit into the i-th slot. After all M slots have been checked, each station knows which stations wish to transmit.

The stations which have reserved their slots transfer their frames in that order.
After data transmission period, next reservation interval begins.

Since everyone agrees on who goes next, there will never be any collisions.

The following figure shows a situation with five stations and a five-slot reservation frame. In the first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only station 1 has made a reservation.
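The first interval of that example can be sketched in a few lines of Python; this is an assumed toy model, not part of the original lab, showing the reservation slots followed by collision-free transmission in slot order.

# Reservation access with M = 5 stations; stations 1, 3 and 4 want to send (as in the example above)
M = 5
wants_to_send = {1: True, 2: False, 3: True, 4: True, 5: False}

# Reservation interval: station i puts a 1 bit in slot i if it has a frame to send
reservation_frame = [1 if wants_to_send[i] else 0 for i in range(1, M + 1)]
print("Reservation slots:", reservation_frame)   # [1, 0, 1, 1, 0]

# Data transmission period: reserved stations send in slot order, with no collisions
for i, reserved in enumerate(reservation_frame, start=1):
    if reserved:
        print(f"Station {i} transmits its frame")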
2. Polling
Polling process is similar to the roll-call performed in class. Just like the teacher, a controller sends a message to each node in turn.

In polling, one device acts as the primary station (controller) and the others are secondary stations. All data exchanges must be made through the controller.

The message sent by the controller contains the address of the node being selected for granting access.

Although all nodes receive the message, only the addressed node responds to it and sends data, if it has any. If it has no data, it usually sends back a "poll reject" (NAK) message.

Problems include high overhead of the polling messages and high dependence on the reliability of the controller.
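The poll/response behaviour described above can be sketched roughly as follows; the station names, queues and the poll function are hypothetical placeholders, and a real protocol exchanges addressed control frames rather than Python calls.

# Toy sketch of a primary station polling each secondary in turn
secondaries = {
    "A": ["frame-A1"],            # station A has one frame queued
    "B": [],                      # station B has nothing to send
    "C": ["frame-C1", "frame-C2"],
}

def poll(queue):
    """The primary sends a poll message; the secondary replies with data or a NAK."""
    if queue:
        return queue.pop(0)       # station responds with its next frame
    return "NAK (poll reject)"    # no data to send

for name, queue in secondaries.items():
    print(f"Poll {name}: {poll(queue)}")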

3. Token Passing
In the token passing scheme, the stations are logically connected to each other in the form of a ring, and access to the channel is governed by a token.

A token is a special bit pattern or a small message, which circulates from one station to the next in some predefined order.

In a token ring, the token is passed from one station to the adjacent station on the ring, whereas in a token bus, each station uses the bus to send the token to the next station in some predefined order.

In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token along.

After sending a frame, each station must wait for all N stations (including itself) to send the token to their neighbours and the other N – 1 stations to send a frame, if they have one.

Problems such as duplication or loss of the token, and the insertion or removal of a station, need to be handled for correct and reliable operation of this scheme.
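A very small simulation of the basic idea (an assumed example; real token rings also handle token loss, priorities and ring maintenance) could look like this:

# Token passing on a logical ring: only the station holding the token may transmit one queued frame
stations = ["S1", "S2", "S3", "S4"]          # logical ring order
queues = {"S1": ["f1"], "S2": [], "S3": ["f3"], "S4": []}

token_holder = 0                              # index of the station currently holding the token
for _ in range(2 * len(stations)):            # circulate the token around the ring twice
    name = stations[token_holder]
    if queues[name]:
        print(f"{name} holds the token and sends {queues[name].pop(0)}")
    else:
        print(f"{name} has nothing to send and passes the token")
    token_holder = (token_holder + 1) % len(stations)   # pass the token to the next station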

CSMA (Carrier Sense Multiple Access)

Carrier Sense Multiple Access (CSMA) is a media access protocol in which a station senses the traffic on the channel (idle or busy) before transmitting data. If the channel is idle, the station can send its data on the channel; otherwise, it must wait until the channel becomes idle. This reduces the chance of a collision on the transmission medium.

CSMA Access Modes
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if the channel is idle, it sends the data immediately. Otherwise it keeps sensing the channel continuously and broadcasts the frame as soon as the channel becomes idle (i.e. with probability 1).

Non-Persistent: In the non-persistent mode of CSMA, each node senses the channel before transmitting; if the channel is idle, it sends the data immediately. Otherwise, the station waits for a random time (it does not sense continuously), then senses the channel again and transmits the frame when the channel is found to be idle.

P-Persistent: This mode is a combination of the 1-persistent and non-persistent modes. Each node senses the channel and, if it is idle, sends the frame with probability p; with probability q = 1 - p it defers to the next time slot and repeats the process. A sketch of the three persistence modes follows.
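The following Python sketch summarizes only the decision logic of the three persistence modes; channel_is_idle, random_wait and wait_one_slot are hypothetical placeholder callables standing in for carrier sensing and for the timing of a real station.

import random

def one_persistent(channel_is_idle):
    while not channel_is_idle():      # sense continuously while the channel is busy
        pass
    return "transmit"                 # send immediately (probability 1) once idle

def non_persistent(channel_is_idle, random_wait):
    while not channel_is_idle():
        random_wait()                 # back off a random time, then sense again
    return "transmit"

def p_persistent(channel_is_idle, p, wait_one_slot):
    while True:
        while not channel_is_idle():
            pass
        if random.random() < p:       # with probability p: transmit in this slot
            return "transmit"
        wait_one_slot()               # with probability q = 1 - p: defer to the next slot and repeat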
CSMA/CD
Carrier sense multiple access with collision detection (CSMA/CD) is a network protocol for transmitting data frames that works in the medium access control layer. A station first senses the shared channel; if the channel is idle, it transmits a frame while monitoring the medium to check whether the transmission was successful. If the frame is received successfully, the station can send the next frame. If a collision is detected, the station sends a jam signal on the shared channel to terminate the transmission, then waits for a random time before sending the frame to the channel again. A simplified sketch of this retry loop follows.
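In the sketch below, all the callables (channel_idle, transmit, collision_detected, send_jam, wait_slots) are hypothetical stand-ins for the real hardware behaviour; the attempt limit of 16 and the backoff exponent capped at 10 follow the values commonly quoted for classic Ethernet, and the whole thing is an illustrative model rather than a real implementation.

import random

MAX_ATTEMPTS = 16

def csma_cd_send(frame, channel_idle, transmit, collision_detected, send_jam, wait_slots):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not channel_idle():            # 1-persistent carrier sense
            pass
        transmit(frame)
        if not collision_detected():
            return "success"                 # frame delivered, station may send the next one
        send_jam()                           # jam signal tells all stations to stop transmitting
        k = min(attempt, 10)                 # binary exponential backoff
        wait_slots(random.randint(0, 2**k - 1))
    return "give up"                         # too many collisions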

CSMA/CA
Carrier sense multiple access with collision avoidance (CSMA/CA) is a network protocol for transmitting data frames that also works in the medium access control layer. Because a wireless sender cannot reliably detect a collision while it is transmitting, it tries to avoid collisions instead: it senses the channel, waits, and then transmits, relying on an acknowledgment from the receiver to confirm delivery. If the sender receives the acknowledgment, the data frame has been successfully transmitted; if no acknowledgment arrives, it assumes the frame was lost or collided on the shared channel and retransmits it.

Difference between CSMA/CD and CSMA/CA:

CSMA/CD is the variant of CSMA that detects collisions on a shared channel; CSMA/CA is the variant that avoids collisions on a shared channel.

CSMA/CD is a collision detection protocol, whereas CSMA/CA is a collision avoidance protocol.

CSMA/CD is used in IEEE 802.3 (wired Ethernet) networks; CSMA/CA is used in IEEE 802.11 (wireless LAN) networks.

CSMA/CD works in wired networks; CSMA/CA works in wireless networks.

CSMA/CD takes effect after a collision has occurred; CSMA/CA takes effect before a collision can occur.

When frames collide on the shared channel, CSMA/CD resends the data frame; CSMA/CA instead waits until the channel is free and tries to prevent the collision in the first place, so it has no recovery step after a collision.

CSMA/CD minimizes the recovery time; CSMA/CA minimizes the risk of collision.

The efficiency of CSMA/CD is higher than that of plain CSMA; the efficiency of CSMA/CA is similar to that of CSMA.

CSMA/CD is more widely used than CSMA/CA; CSMA/CA is less common.

Friday 27 September 2024

E-Mail Server using CPT

E-mail refers to the electronic means of sending and receiving messages over the Internet.

Components of an Email:

Sender: The sender creates an email in which he records the information that needs to be transferred to the receiver.

Receiver: The receiver gets the information sent by the sender via email.

Email address: An email address is like a house address; it identifies where messages for a user are delivered, allowing the sender and receiver to communicate with each other.

Mailer: The mailer program provides the ability to read, write, manage and delete emails, e.g. Gmail, Outlook, etc.

Mail Server: The mail server is responsible for sending, receiving, storing and managing all the mail handled by the respective mail programs and delivering it to the respective users.

SMTP: SMTP stands for Simple Mail Transfer Protocol. SMTP is the protocol used to transfer email messages between mail servers over the Internet.

Protocols of Email: Emails basically use two standard protocols for accessing messages from the mail server. They are:

POP: POP stands for Post Office Protocol. Similar to a post office, the approach is simply to drop the email off at the mail service provider and let the service handle the transfer of messages; we can even disconnect from the Internet after handing the email over via POP. There is also no requirement to leave a copy of the email on the web server, so it uses very little server storage. POP allows emails from different email addresses to be collected in a single mail program. A disadvantage of POP is that the communication is essentially one-way: messages are downloaded from the server to the client, and changes made in the client are not reflected back on the server.

IMAP: IMAP stands for Internet Message Access Protocol. IMAP has some advantages over POP: mail access is synchronized in both directions, since the messages remain stored and organized on the server, and it supports more advanced features, such as keeping track of which messages have already been read.

Working of Email:

When the sender sends the email using the mail program, it is handed to the Simple Mail Transfer Protocol, which checks whether the receiver's email address belongs to another domain name or to the same domain name as the sender (Gmail, Outlook, etc.). The email is then stored on the server, to be transferred later using the POP or IMAP protocols.

If the receiver's address is in another domain, the sender's SMTP server looks up the DNS (Domain Name System) record of that domain to find the receiver's mail server. The sender's SMTP server then communicates with the receiver's SMTP server, which carries out the transfer, and in this way the email is delivered to the receiver's SMTP server.

If, due to network traffic issues, the SMTP servers of the sender and the receiver are not able to communicate with each other, the email to be transferred is placed in a queue and is finally delivered once the issue is resolved. If, in the worst case, the message remains in the queue for too long, it is returned to the sender as undelivered. A small sketch of handing a message to an SMTP server is shown below.
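Outside Packet Tracer, the hand-off of a message to an SMTP server can be sketched in a few lines of Python using the standard smtplib module; the server name, port and addresses below are placeholders, not real accounts, so the script only works against a reachable SMTP server.

import smtplib
from email.message import EmailMessage

# Build a simple message (addresses are placeholder examples)
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "receiver@example.com"
msg["Subject"] = "Test message"
msg.set_content("Hello over SMTP")

# Connect to the sender's SMTP server, which relays the message toward
# the receiver's domain (as described above), then close the connection.
with smtplib.SMTP("smtp.example.com", 25) as server:
    server.send_message(msg)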




OUTPUT:



Monday 16 September 2024

Random Access: ALOHA

ALOHA is a simple communication protocol used in computer networks, specifically in wireless systems. Developed in the 1970s at the University of Hawaii, ALOHA was designed for data communication over radio waves.
ALOHA is a contention-based protocol, where multiple users share a communication medium (e.g., a wireless channel). The protocol allows devices to send data whenever they have data to transmit, without prior coordination.
There are two primary versions of ALOHA: 
  1. Pure ALOHA 
  2. Slotted ALOHA.
1. Pure ALOHA:
In Pure ALOHA, any device can send data whenever it has data to send. The downside of this approach is that data packets can collide if more than one device transmits simultaneously. 

These collisions lead to data loss and require retransmissions. 

To manage collisions, the sender waits a random amount of time before attempting to resend the data. 

The maximum throughput of Pure ALOHA is 18.4%, meaning only 18.4% of the channel’s capacity is effectively utilized due to collisions.

Vulnerable Time of pure ALOHA
Vulnerable time is the interval during which a collision with a transmitted frame can occur. If a frame B is sent at a particular time t, then any other frame A that starts between t - Tt and t + Tt (that is, while B is still being transmitted, or so shortly before that A has not finished when B starts) will collide with it.
Here Tt is the transmission time of one frame.
The vulnerable time when a collision can occur is Vt = (t + Tt) - (t - Tt) = 2 * Tt.

The vulnerable time is expressed in terms of the frame transmission time, i.e. the total time taken to transmit one complete frame from the station:

Frame transmission time Tt = (Frame length) / (Channel bandwidth), so for pure ALOHA the vulnerable time is 2 * Tt.

Channel Throughput of Pure ALOHA
Here G is the average number of frames generated by the system during one frame transmission time. A frame is delivered successfully only if no other frame is sent during its vulnerable time of 2 * Tt, which happens with probability e^(-2G), so the throughput is S = G * e^(-2G).


Maximum Efficiency
The maximum throughput is obtained by setting dS/dG = 0, which gives G = 1/2 and S(max) = 1/(2e) ≈ 0.184.

The efficiency of pure ALOHA is therefore about 18.4%, which is quite low because of the large number of collisions.

2. Slotted ALOHA:
Slotted ALOHA improves the efficiency by introducing time slots. Devices can only send data at the beginning of each slot. 

This reduces the likelihood of collisions because transmissions are aligned to discrete time intervals.

The throughput of Slotted ALOHA is higher than Pure ALOHA, reaching about 36.8%.

Vulnerable Time of slotted ALOHA
Because stations may begin transmitting only at the start of a time slot, a collision can occur only with frames generated in the same slot, so the vulnerable time is reduced to one frame transmission time:
The vulnerable time (Vt) = Tt

Channel Throughput of Slotted ALOHA
Here G is the average number of frames generated by the system during one frame transmission time. A frame is delivered successfully only if no other frame is generated in the same slot, which happens with probability e^(-G), so the throughput is S = G * e^(-G).

Maximum Efficiency
The maximum throughput is obtained at G = 1, giving S(max) = 1/e ≈ 0.368, i.e. an efficiency of about 36.8%.
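The quoted maxima for both versions can be checked numerically with a few lines of Python:

import math

# Throughput formulas quoted above:
# pure ALOHA    S = G * e^(-2G), maximum at G = 1/2
# slotted ALOHA S = G * e^(-G),  maximum at G = 1

def pure_aloha(G):
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

print(f"Pure ALOHA max    S = {pure_aloha(0.5):.3f}")    # ~0.184 -> 18.4 %
print(f"Slotted ALOHA max S = {slotted_aloha(1.0):.3f}") # ~0.368 -> 36.8 %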


Difference between Pure ALOHA and Slotted ALOHA:

In pure ALOHA, a station can transmit at any time; in slotted ALOHA, a station can transmit only at the beginning of a time slot.

The vulnerable time of pure ALOHA is 2 * Tt; the vulnerable time of slotted ALOHA is Tt.

The maximum throughput of pure ALOHA is 1/(2e) ≈ 18.4%; the maximum throughput of slotted ALOHA is 1/e ≈ 36.8%.

Pure ALOHA does not require synchronization; slotted ALOHA requires the stations to be synchronized to the slot boundaries.

Sunday 8 September 2024

DHCP Configuration Using CPT

Dynamic Host Configuration Protocol (DHCP) is a network management protocol used to dynamically assign an IP address to any device, or node, on a network so that it can communicate using IP (Internet Protocol). DHCP automates and centrally manages these configurations: there is no need to manually assign IP addresses to new devices, and no user configuration is required to connect to a DHCP-based network.

DHCP can be implemented on small local networks as well as large enterprise networks. It is the default protocol used by most routers and networking equipment. DHCP is defined in RFC (Request for Comments) 2131.

DHCP does the following:
DHCP manages the provisioning of all the nodes or devices that are added to or dropped from the network.
DHCP maintains a unique IP address for each host using a DHCP server. Whenever a client/node/device that is configured to work with DHCP connects to a network, it sends a request to the DHCP server, and the server responds by providing an IP address to the client/node/device.
DHCP is also used to configure the proper subnet mask, default gateway and DNS server information on the node or device.


How DHCP works
DHCP runs at the application layer of the TCP/IP protocol stack to dynamically assign IP addresses to DHCP clients/nodes and to allocate TCP/IP configuration information to the DHCP clients. Information includes subnet mask information, default gateway, IP addresses and domain name system addresses.

DHCP is based on a client-server model in which servers manage a pool of unique IP addresses, as well as information about client configuration parameters, and assign addresses out of those address pools.

The DHCP lease process works as follows:
First of all, a client (network device) must be connected to the network.
The DHCP client requests an IP address, typically by broadcasting a query for this information.
The DHCP server responds to the client request by providing an IP address and other configuration information. This configuration information also includes a time period, called a lease, for which the allocation is valid.
When refreshing an assignment, a DHCP client requests the same parameters, but the DHCP server may assign a new IP address, based on the policies set by the administrator. A toy model of such lease assignment is sketched below.
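The sketch below is an assumption for illustration only: it models a server leasing addresses from a pool for a fixed lease time, and does not implement the real DHCP message exchange (discover, offer, request, acknowledge) or packet formats.

import ipaddress

# Address pool and lease time (example values)
POOL = [str(ip) for ip in ipaddress.ip_network("192.168.1.0/28").hosts()]
LEASE_SECONDS = 86400                       # 24-hour lease

leases = {}                                 # client MAC -> assigned IP

def request_address(mac):
    """Assign the lowest free address from the pool (addresses are handed out in order)."""
    if mac in leases:                       # a renewing client usually keeps its address
        return leases[mac], LEASE_SECONDS
    for ip in POOL:
        if ip not in leases.values():
            leases[mac] = ip
            return ip, LEASE_SECONDS
    raise RuntimeError("address pool exhausted")

print(request_address("00:11:22:33:44:55"))   # ('192.168.1.1', 86400)
print(request_address("66:77:88:99:aa:bb"))   # ('192.168.1.2', 86400)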

Components of DHCP
DHCP Server: DHCP server is a networked device running the DHCP service that holds IP addresses and related configuration information. 

DHCP client: DHCP client is the endpoint that receives configuration information from a DHCP server. This can be any device like computer, laptop, IoT endpoint or anything else that requires connectivity to the network. 

IP address pool: IP address pool is the range of addresses that are available to DHCP clients. IP addresses are typically handed out sequentially from lowest to the highest.

Subnet: A subnet is a partitioned segment of an IP network. Subnets are used to keep networks manageable.

Lease: Lease is the length of time for which a DHCP client holds the IP address information. When a lease expires, the client has to renew it.

DHCP relay: A host or router that listens for client messages being broadcast on that network and then forwards them to a configured server. The server then sends responses back to the relay agent that passes them along to the client. DHCP relay can be used to centralize DHCP servers instead of having a server on each subnet.

Benefits of DHCP
Centralized administration of IP configuration: DHCP IP configuration information can be stored in a single location, enabling the administrator to centrally manage all IP address configuration information.

Dynamic host configuration: DHCP automates the host configuration process and eliminates the need to manually configure individual hosts when TCP/IP (Transmission Control Protocol/Internet Protocol) is first deployed or when IP infrastructure changes are required.

In Packet Tracer, the host PC0 is configured to obtain its address automatically through DHCP, with the following IPs:

Finally, PC0, PC1, PC2 and PC3 each receive a dynamically allocated address, as shown below:

DNS SERVER & WEB SERVER using CPT

Connect one PC and two SERVERS to one SWITCH, where one server acts as the WEB SERVER and the other acts as the DNS SERVER, as shown below with the following IPs:


WEBSERVER IP


WEB SERVER HTTP CODE

Program 1:

<html>
<body>
<center><font size='+2' color='blue'>Cisco Packet Tracer</font></center>
<h1>Welcome to My Web Page</h1>
<h2>Welcome to My Web Page</h2>
<h3>Welcome to My Web Page</h3>
</body>
</html>


Program 2:

<!doctype html>
<html>
<head>
<title> FLAG </title>
</head>
<body bgcolor="white">
<center>
<hr color="orange" size="80">
<font face="algerian" color="blue" size="8">
<marquee scrolldelay="10" direction="left" behavior="alternate">Cisco Packet Tracer</marquee>
</font>
<hr color="green" size="80">
</center>
</body>
</html>



DNS SERVER IP


On the DNS server, add the domain name, i.e. www.mywebsite.com, pointing to the web server hosting the created webpage, and set the Default Gateway to 192.168.1.1.

Assign the PC with the following IPs


Now click on the PC -> Web Browser, type www.mywebsite.com and click the Go button; we can then visualize the flow of packets between the PC and both servers using Cisco Packet Tracer.

Wednesday 4 September 2024

Sliding Window Protocol

The sliding window is a technique for sending multiple frames at a time. It controls the flow of data packets between two devices where reliable, in-order delivery of data frames is needed. It is also used in TCP (Transmission Control Protocol).

In this technique, each frame is assigned a sequence number. The sequence numbers are used to identify missing frames at the receiver end, and they also allow the receiver to detect and discard duplicate data.

Types of Sliding Window Protocol: Sliding window protocol has two types
1.Go-Back-N ARQ
2.Selective Repeat ARQ

1.Go-Back-N ARQ
The Go-Back-N Automatic Repeat Request protocol is also known as the Go-Back-N ARQ protocol. This data link layer protocol uses a sliding window: in the event of corruption or loss of a frame, that frame and all subsequent frames must be sent again.
In this protocol, the sender window size is N, while the size of the receiver window is always one.

If a transmitted frame arrives corrupted, the receiver discards it; the receiver does not accept a corrupted frame, and the sender sends the correct frame again when its timer expires.
When a frame is received correctly and in order, the receiver sends the acknowledgement for it, and upon receiving that, the sender slides the window again and sends the next frame. This process keeps on happening until all the frames are sent successfully. A simplified simulation of this behaviour follows.
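The sketch below is an assumed toy model of Go-Back-N (no real timers or frame formats): the sender window has size N, the receiver accepts only the next in-order frame, and after a loss the sender goes back and resends everything from the oldest unacknowledged frame.

N = 4                                   # sender window size
frames = list(range(8))                 # sequence numbers 0..7 to deliver
lose_once = {3}                         # pretend frame 3 is lost on its first try

base = 0                                # oldest unacknowledged frame
expected = 0                            # the only frame the receiver will accept
delivered = []

while base < len(frames):
    # send every frame currently inside the window [base, base + N)
    for seq in frames[base:base + N]:
        if seq in lose_once:            # simulate a lost frame
            lose_once.discard(seq)
            break                       # receiver discards everything after the loss
        if seq == expected:             # receiver accepts only the in-order frame
            delivered.append(seq)
            expected += 1
    base = expected                     # cumulative ACK slides the window; on timeout the
                                        # sender goes back and resends from here onward

print(delivered)                        # [0, 1, 2, 3, 4, 5, 6, 7]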

GATE practice questions (answers: 16 and 17).

2. Selective Repeat ARQ
Selective Repeat ARQ stands for Selective Repeat Automatic Repeat Request. It is another data link layer protocol that uses a sliding window. Go-Back-N ARQ works well if errors are rare; however, if frames are lost or corrupted frequently, resending all subsequent frames wastes a lot of bandwidth. In that case the Selective Repeat ARQ method is used, in which only the damaged or lost frames are retransmitted.
In this protocol, the size of the sender window is always equal to the size of the receiver window, and the window size is always greater than 1.
Difference between Stop-and-Wait, Go-Back-N, and Selective Repeat:

Stop-and-Wait sends one frame at a time and waits for its acknowledgement, whereas Go-Back-N and Selective Repeat send multiple frames using a sliding window.

Sender window size is 1 for Stop-and-Wait and N for both Go-Back-N and Selective Repeat; receiver window size is 1 for Stop-and-Wait and Go-Back-N, and N for Selective Repeat.

On an error, Stop-and-Wait retransmits the single outstanding frame, Go-Back-N retransmits the lost frame and all frames sent after it, and Selective Repeat retransmits only the lost or damaged frame.

Selective Repeat makes the best use of bandwidth but requires the most buffering and the most complex receiver.


OSPF Using CPT

OSPF (Open Shortest Path First)  is a common networking protocol used for routing within an autonomous system and is widely used due to its ...