
Case Study Report – World of BCU

 

Online gaming has become one of the largest mainstream
entertainment services, with rapid, continuous growth in the number of people around
the world competing in, or simply enjoying their time in, the online worlds of the
franchises they love, and ‘World of BCU’ is one of those beloved games. It is an
MMORPG (Massively Multiplayer Online Role-Playing Game) that places the
player in the role of a BCU student undertaking quests to obtain the
ultimate item called ‘The Degree’. In the online world, players collaborate in
real time to complete tasks.

 

The online world’s state is managed and hosted on servers in the
BCU datacentre located in Birmingham. This case study report discusses how the
network works behind the scenes of a game that many people play every day: it
examines the structure of a network that keeps many players in one world,
identifies the issues, and explores potential problems that may
affect the network.

 

An MMO has a vast, growing player base and offers new
ways to learn, entertain, collaborate, socialise, visualise information and
do business. This can only happen through networks that connect the
players with each other and with the world. A reliable network is crucial to ensure the best
possible performance, so that players can experience the freedom they have been given. MMORPGs
are the most popular of all MMO game genres.

 

A high-level MMO architecture has game clients that
render the game for the user, game servers that interact with those
clients, a web application server that integrates with the game servers
and clients, and a database server that persists and retrieves data. The
components that make up this high-level architecture are therefore the Game
Client, the Game Server, the Web Application Server and a Database Management System.
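
As a rough illustration of how these components relate (the class names, the in-memory store and the quest logic below are purely hypothetical, not taken from the BCU system; the web application server is omitted for brevity), a player action might flow through the layers like this:

# Minimal, illustrative sketch of the high-level components and how a player
# action might flow between them. All names and values are invented.

class DatabaseServer:
    """Persists and retrieves game data (a stand-in for a real DBMS)."""
    def __init__(self):
        self.store = {}

    def save(self, player, state):
        self.store[player] = state

    def load(self, player):
        return self.store.get(player, {"xp": 0})


class GameServer:
    """Holds live world state and talks to the database server."""
    def __init__(self, db):
        self.db = db

    def handle_action(self, player, action):
        state = self.db.load(player)
        if action == "complete_quest":
            state["xp"] = state.get("xp", 0) + 100
        self.db.save(player, state)
        return state


class GameClient:
    """Renders the game for the user and forwards actions to a game server."""
    def __init__(self, server):
        self.server = server

    def play(self, player, action):
        return self.server.handle_action(player, action)


client = GameClient(GameServer(DatabaseServer()))
print(client.play("student_1", "complete_quest"))  # {'xp': 100}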

 

Network Topology of the Given Scenario


The client is the player, who is connected to the internet.
The firewall comes into play when the client accesses the website or the software client; it acts as
a barrier around a network of machines and users that operate under a common security
policy and generally trust each other. After that, the user is passed to
the login servers, where the information entered is checked against the user database
for a match. The user goes through authentication to verify that the
information is correct before fully proceeding to the game servers. There are multiple
game servers: if one server is full of players, the client is moved
to another server with fewer people in it. Game Server 1
always acts as the priority server, while the other two remain on standby in case
Server 1 reaches full capacity. A multi-user instance allows
multiple users to connect to the system over the network at the same time or at different
times.
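
A small sketch of that server-selection rule (the capacity figure and server names are assumptions for illustration, not values from the scenario): Game Server 1 is preferred, and players overflow to the standby servers only once it is full.

# Illustrative selection rule: prefer the priority server, fall back to the
# least-loaded standby server only when the priority server is at capacity.
SERVER_CAPACITY = 2000  # assumed maximum players per server

current_load = {
    "game-server-1": 2000,  # example player counts
    "game-server-2": 850,
    "game-server-3": 430,
}

def pick_server(load, preferred="game-server-1"):
    if load[preferred] < SERVER_CAPACITY:
        return preferred                      # priority server has free slots
    standby = {name: n for name, n in load.items() if name != preferred}
    return min(standby, key=standby.get)      # otherwise least-loaded standby

print(pick_server(current_load))  # 'game-server-3' in this example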

QoS (Quality of Service) Requirements

Quality of Service (QoS) requirements are technical specifications
that describe qualities such as performance, availability, scalability
and much more for a given scenario. They are driven by business needs
stated in the business requirements; for example, the services of a certain game or
media application may have to be available 24 hours a day throughout the whole year,
and for many years to come. The availability requirement must address that business
requirement. The primary goal of QoS is to provide priority, including dedicated
bandwidth, controlled jitter and latency (required by some real-time and interactive
traffic), and improved loss characteristics. It is important to ensure that providing
priority for one or more flows does not cause other flows to fail the guarantees made to
users. QoS also gives a clear understanding of how fast data can be transferred, how long
the receiver must wait, how correct the received data is likely to be, and how
much data is likely to be lost.

MMO Game Architecture Requirements

 

The list below forms a basis for the QoS requirements of an
MMO game. These are some of the system qualities that affect the QoS requirements.

 

Scalability
–  Ability to scale seamlessly to support
any potential number of concurrent users without shutting down the servers; one
server should scale to thousands of players with relative ease. Scalability
usually requires additional resources, but should not require changes to the
design of the deployment architecture, or loss of service due to the time
required to add those additional resources.

 

Flexibility in
deployment and in game design – To minimise the constraints on game
designers and facilitate the design of an expansive, integrated world. The
architecture should allow a wide range of game-server configurations, where
reconfiguration is relatively easy and updates are transparent to the end user.

 

Low Latency –
Synchronize tens of thousands of clients simultaneously without dropping any
data.

 

Processing –
Process hundreds of thousands of threads simultaneously without any error.

 

Performance and
Persistence – One of the most vital components of smooth, quick and seamless
gameplay, providing performance-related functions such as game-server load
balancing and monitoring. The state of the player, the game and the connection
must be maintained at all times.

 

Functions –
Provide a means to perform the functions an MMO game needs. A feature might be the
ability to persist and retrieve game data, such as player and game-related
data.

 

Redundancy – A
redundant server and database structure so that the world state can be restored.

 

Security – Be
secure and provide strong protection against exposure to security risks
that might harm the end user. Security includes authentication and
authorization of users, security of data, and secure access to the deployed
system.

 

Network Performance

 

Delay: The amount of time data (the signal) takes to reach
the destination. A higher delay generally indicates congestion or some
breakdown in the communication.

 

Transmission Delays:
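
As a rough sketch using the standard textbook breakdown (the packet size, link rate and distance below are assumed example values, not measurements from the BCU network), the delay a packet experiences on each hop is the sum of transmission, propagation, processing and queuing delays, where transmission delay is the packet size divided by the link bandwidth:

# Standard per-hop delay components, computed with assumed example values.
packet_size_bits = 1500 * 8      # a 1500-byte packet
link_rate_bps = 100 * 10**6      # 100 Mbit/s link (assumed)
distance_m = 200_000             # 200 km of cable (assumed)
signal_speed = 2 * 10**8         # roughly 2/3 the speed of light in cable

transmission_delay = packet_size_bits / link_rate_bps  # time to push the bits onto the link
propagation_delay = distance_m / signal_speed          # time for the bits to travel the link

print(f"transmission delay: {transmission_delay * 1000:.3f} ms")  # 0.120 ms
print(f"propagation delay:  {propagation_delay * 1000:.3f} ms")   # 1.000 ms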

Jitter: The variation in delay over time. It occurs when
a system is not in a deterministic state; video streaming, for example, suffers
from jitter a lot because the amount of data transferred is quite large and
there is no way of saying how long it might take to transfer. Jitter in the network
impacts collaborative coordination more than latency does. Higher latencies
with low jitter still allow collaborators to make reasonable predictions
of how an environment will behave (though overall task performance will decline).
High jitter, however, reduces predictability, and collaborators are forced
to employ a purely sequential interaction strategy.
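
As an illustrative sketch (the latency samples below are invented values), jitter can be estimated as the average variation between consecutive delay measurements:

# Estimate jitter as the mean absolute difference between consecutive delay
# samples. The sample values are made up for illustration.
delays_ms = [42.0, 45.5, 41.0, 60.2, 43.1]  # per-packet one-way delays

def mean_jitter(delays):
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    return sum(diffs) / len(diffs)

print(f"average latency: {sum(delays_ms) / len(delays_ms):.1f} ms")
print(f"jitter:          {mean_jitter(delays_ms):.1f} ms")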

 

There are three major network performance indicators
(latency, throughput and packet loss); the points below describe what they
reflect and how they interact with each other in TCP and UDP traffic streams,
with a small measurement sketch after the list.

 

·       Latency is the time required to carry a packet across a network.

–         Latency may be measured in many ways: round trip, one way, etc.

–         Latency may be impacted by any element in the chain used to carry the data: routers, WAN links, local area networks, workstations, servers, etc.

·       Throughput is defined as the quantity of data sent or received per unit of time.

·       Packet loss reflects the number of packets lost per hundred of packets sent by a host.
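
A toy sketch of how the three indicators could be computed from per-packet records (the timestamps and packet sizes below are invented example data, not measurements from the scenario):

# Compute latency, throughput and packet loss from per-packet records.
# Each record is (size_in_bytes, time_sent_s, time_received_s or None if lost).
packets = [
    (1200, 0.000, 0.041),
    (1200, 0.020, 0.066),
    (1200, 0.040, None),   # this packet was lost
    (1200, 0.060, 0.104),
]

received = [p for p in packets if p[2] is not None]
latencies = [recv - sent for _, sent, recv in received]

avg_latency_ms = 1000 * sum(latencies) / len(latencies)
duration_s = max(p[2] for p in received) - min(p[1] for p in packets)
throughput_kbps = sum(size * 8 for size, _, _ in received) / duration_s / 1000
loss_pct = 100 * (len(packets) - len(received)) / len(packets)

print(f"latency:     {avg_latency_ms:.1f} ms")
print(f"throughput:  {throughput_kbps:.1f} kbit/s")
print(f"packet loss: {loss_pct:.0f}%")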

 

UDP throughput is not impacted by latency. UDP is a
protocol used to carry data over IP networks, and the rate at which packets can be sent
by the sender is not affected by the time required to deliver the packets to
the other party (the latency). Whatever that time is, the sender will send a given
number of packets per second, depending on other factors such as the application,
operating system and resources. TCP, however, is directly impacted by latency
because it is a more complex protocol: it integrates a mechanism which checks that
all packets are correctly delivered. This mechanism is called
“acknowledgement”, and it consists of the receiver sending a specific
packet or flag to the sender to confirm the proper reception of a packet.
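
This is why, with a fixed window of unacknowledged data, the maximum TCP throughput is bounded by the window size divided by the round-trip time; a small illustration (the 64 KiB window is an assumed example value):

# Maximum TCP throughput with a fixed window is bounded by window / RTT.
window_bytes = 64 * 1024  # assumed 64 KiB receive window

for rtt_ms in (10, 50, 200):
    max_mbps = (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000
    print(f"RTT {rtt_ms:>3} ms -> at most {max_mbps:.1f} Mbit/s")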

 

TCP and UDP

TCP

TCP stands for “transmission control protocol” and IP stands
for “internet protocol”. Together they form the backbone of almost everything
in the online world, from web browsing to IRC to email; it’s all built on top
of TCP/IP. A TCP socket provides a reliable, connection-based protocol: a
connection is created between two machines, and exchanging data is much like
writing to a file on one side and reading from a file on the other. Why
are TCP connections reliable? All the data sent is guaranteed to arrive at the
other side, and in the order it was written by the other user. It’s also a
stream protocol, meaning TCP automatically splits the data into packets and
sends them over the network, making sure data is not sent too fast for the
connection to handle.
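
A minimal sketch of that behaviour in Python (the loopback address and port are placeholders): the server echoes back whatever it receives, and the stream socket delivers the bytes reliably and in order.

import socket
import threading

HOST, PORT = "127.0.0.1", 5050  # placeholder address and port
ready = threading.Event()

def echo_server():
    # Accept one TCP connection and echo back whatever is received.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()  # signal that the server is listening
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

threading.Thread(target=echo_server, daemon=True).start()
ready.wait()

# Client side: connect, send, and read the reply in order and reliably.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"quest update")
    print(cli.recv(1024))  # b'quest update'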

UDP

UDP stands for “user datagram protocol” and it’s another
protocol built on top of IP, but unlike TCP, instead of adding lots of features and
complexity, UDP is a very thin layer over IP. UDP is a connectionless protocol.
It is largely used by time-sensitive applications, as well as by servers that
answer small queries from huge numbers of clients. UDP is compatible with packet
broadcast (sending to all hosts on a network) and multicasting (sending to
all subscribers). UDP is commonly used in the Domain Name System, Voice over IP,
the Trivial File Transfer Protocol and online games.
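
For comparison, a minimal UDP sketch (again with a placeholder loopback address and port): no connection is established, each datagram travels on its own, and delivery is not guaranteed.

import socket

HOST, PORT = "127.0.0.1", 5051  # placeholder address and port

# Receiver: bind a datagram socket; no connection or handshake is needed.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))

# Sender: fire a single datagram at the receiver's address.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"player position: x=12 y=7", (HOST, PORT))

data, addr = receiver.recvfrom(1024)
print(data, addr)

sender.close()
receiver.close()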

Which to use for Game Servers?

For massively multiplayer online games, developers often must
make an architectural choice between using UDP or persistent TCP connections. The
advantages of TCP are persistent connections, reliability, and being able to use
packets of arbitrary sizes. The disadvantage of TCP in this scenario is its
congestion control algorithm, which treats packet loss as a sign of
bandwidth limitations and automatically throttles the sending of packets. On
3G or Wi-Fi networks, this can cause significant latency.

Experienced developer Christoffer Lernö
weighed the pros and cons and recommends the following criteria to choose
whether to use TCP or UDP for your game:

·       Use HTTP over TCP for making occasional, client-initiated stateless queries when it’s OK to have an occasional delay.

·       Use persistent TCP sockets if both client and server independently send packets but an occasional delay is OK (e.g. online poker, many MMOs).

·       Use UDP if both client and server may independently send packets and occasional lag is not OK (e.g. most multiplayer action games, some MMOs).


QoS mechanisms

Differentiated services

Differentiated Services, or DiffServ, is a computer networking
architecture that specifies a simple and scalable mechanism for classifying and
managing network traffic and providing quality of service on modern IP
networks. DiffServ can be used to provide low-latency treatment for critical network
traffic such as voice, streaming services or even online games, while providing
simple best-effort service to non-critical traffic such as web browsing or file
transfers. DiffServ reduces the burden on network devices and scales easily as
the network grows. It operates on the principle of traffic classification,
where each data packet is placed into one of a limited number of traffic classes,
rather than differentiating network traffic based on the requirements of an
individual flow.

 

 

 

 

 

Each router on the network is configured to differentiate traffic
based on its class. Each traffic class can be managed differently, ensuring
preferential treatment for higher priority traffic on the network.

DiffServ uses 6 bits in the IP header to specify its values,
called the DSCP (DiffServ Code Point); these are the first 6 bits of the TOS field, the
first three of which were formerly used for IP Precedence. Differentiated
Services has subsumed IP Precedence, but maintains backward compatibility with it.
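
On hosts that permit it, an application can request a DiffServ class by writing the DSCP value into those six bits of the TOS byte; a minimal sketch (DSCP 46, Expedited Forwarding, is a common marking for latency-sensitive traffic such as game or voice packets, the destination address is a placeholder, and whether routers actually honour the marking depends entirely on network policy):

import socket

# DSCP 46 (Expedited Forwarding) occupies the top 6 bits of the TOS byte,
# so the value written to the socket option is 46 << 2 = 0xB8.
DSCP_EF = 46
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Datagrams sent on this socket now carry the EF marking in the IP header.
sock.sendto(b"latency-sensitive game update", ("203.0.113.10", 5052))
sock.close()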

Traffic
conditioning: Ensures that traffic entering the DiffServ domain conforms to the
policies agreed for that domain.

Packet
classification: Uses a traffic descriptor to categorise a
packet within a specific group.

Packet
marking: Marks a packet so that it can be classified and treated according to a
specific traffic descriptor.

Congestion
management: Handles scheduling and traffic queuing (see the sketch after this list).

Congestion
avoidance: Monitors network traffic loads in order to avoid congestion at common
network bottlenecks; it may be achieved through packet dropping.
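
As a toy sketch of the congestion-management idea (not a real router implementation; the class-to-priority mapping is invented for illustration), packets in higher-priority traffic classes are simply dequeued before lower-priority ones:

import heapq

# Toy priority scheduler: lower priority numbers are dequeued first.
PRIORITY = {"voice": 0, "game": 1, "web": 2, "bulk": 3}

queue = []
sequence = 0  # tie-breaker so packets in the same class stay in FIFO order

def enqueue(traffic_class, packet):
    global sequence
    heapq.heappush(queue, (PRIORITY[traffic_class], sequence, packet))
    sequence += 1

def dequeue():
    return heapq.heappop(queue)[2]

enqueue("bulk", "file chunk")
enqueue("game", "world-state update")
enqueue("voice", "VoIP frame")

print([dequeue() for _ in range(3)])
# ['VoIP frame', 'world-state update', 'file chunk']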

Multicast Routing Protocols

IP multicast is a method of sending Internet Protocol (IP)
datagrams to a group of interested receivers in a single transmission. It is a
form of point-to-multipoint communication employed for streaming media and
other applications on the internet and on private networks. IP multicast uses
specially reserved multicast address blocks in IPv4 and IPv6.


The most common transport layer protocol to use multicast
addressing is UDP. By its nature, UDP is not reliable, because messages may be
lost or delivered out of order. Reliable multicast protocols such as Pragmatic
General Multicast (PGM) have been developed to add loss detection and
retransmission on top of IP multicast.
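
A minimal sketch of a UDP multicast receiver in Python (the group address and port are example values; any sender on the same network could then reach every joined host with a single datagram sent to that group):

import socket
import struct

GROUP, PORT = "239.1.1.1", 5007  # example group address and port

# Bind to the port, then ask the kernel to join the multicast group.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

# A sender would transmit with: sender.sendto(b"world event", (GROUP, PORT))
sock.settimeout(2.0)
try:
    data, addr = sock.recvfrom(1024)
    print(data, addr)
except socket.timeout:
    print("no multicast datagram received within 2 seconds")
finally:
    sock.close()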

Multicast is not a connection-oriented mechanism, so
protocols such as TCP, which allow for retransmission of missing packets, are
not appropriate. For applications such as streaming audio and video, an
occasional dropped packet is not a problem, but for the distribution of critical
data, a mechanism is required for requesting retransmission.