Friday, May 23

Bittorrent

What Is Bittorrent?

                 BitTorrent is a protocol designed for transferring files. It is peer-to-peer in nature, as users connect to each other directly to send and receive portions of the file. However, there is a central server (called a tracker) which coordinates the action of all such peers. The tracker only manages connections; it does not have any knowledge of the contents of the files being distributed, and therefore a large number of users can be supported with relatively limited tracker bandwidth. The key philosophy of BitTorrent is that users should upload (transmit outbound) at the same time they are downloading (receiving inbound). In this manner, network bandwidth is utilized as efficiently as possible.
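As an illustration of the tracker's coordinating role, the sketch below shows the kind of announce request a client might send to report its transfer state and obtain a list of peers. The tracker URL, info_hash and peer_id values are placeholders, not taken from any real torrent.

import urllib.parse
import urllib.request

# Illustrative sketch of a tracker announce. The tracker never sees file
# contents; the request only reports this peer's transfer state, and the
# (bencoded) response lists other peers to connect to.
def announce(tracker_url, info_hash, peer_id, port, uploaded, downloaded, left):
    params = {
        "info_hash": info_hash,    # 20-byte SHA-1 of the torrent's info dictionary
        "peer_id": peer_id,        # 20-byte identifier chosen by this client
        "port": port,              # port on which we accept incoming peer connections
        "uploaded": uploaded,      # bytes sent to other peers so far
        "downloaded": downloaded,  # bytes received so far
        "left": left,              # bytes still needed to complete the file
    }
    url = tracker_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        return response.read()     # bencoded dictionary containing a peer list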

What Bittorrent Does?

                  When a file is made available using HTTP, all upload cost is placed on the hosting machine. With BitTorrent, when multiple people are downloading the same file at the same time, they upload pieces of the file to each other. This redistributes the cost of upload to the downloaders (where it is often not even metered), making it affordable to host a file with a potentially unlimited number of downloaders. Researchers have attempted to find practical techniques for this before, but it has not previously been deployed on a large scale because the logistical and robustness problems are quite difficult. Simply figuring out which peers have what parts of the file and where they should be sent is difficult to do without incurring a huge overhead. In addition, real deployments experience very high churn rates: peers rarely stay connected for more than a few hours, and frequently for only a few minutes.


Pareto Efficiency

                  Well-known economic theories show that systems which are Pareto efficient, meaning that no two counterparties can make an exchange and both be happier, tend to have all of the above properties. In computer science terms, seeking Pareto efficiency is a local optimization algorithm in which pairs of counterparties see if they can improve their lot together, and such algorithms tend to lead to global optima. Specifically, if two peers are both getting poor reciprocation for some of the upload they are providing, they can often start uploading to each other instead and both get a better download rate than they had before.
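A minimal sketch of this pairwise-improvement idea, assuming we track each peer's recent download rate: upload slots are simply given to the peers that have recently reciprocated best. This is only the core intuition, not BitTorrent's full choking algorithm.

# Reciprocation sketch: upload to the peers that recently gave us the best
# download rates, so poorly reciprocated pairs naturally re-pair elsewhere.
def choose_peers_to_upload_to(download_rates, slots=4):
    # download_rates maps peer_id -> recent download rate in bytes/second
    ranked = sorted(download_rates, key=download_rates.get, reverse=True)
    return ranked[:slots]

rates = {"peer_a": 120_000, "peer_b": 45_000, "peer_c": 300_000, "peer_d": 10_000}
print(choose_peers_to_upload_to(rates, slots=2))   # ['peer_c', 'peer_a']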

Abstract

              Torrent refers to the small metadata file you receive from the web server (the one that ends in .torrent). Metadata here means that the file contains information about the data you want to download, not the data itself. This is what is sent to your computer when you click on a download link on a website.
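The sketch below is a minimal bencode decoder, enough to peek inside a .torrent file and read fields such as the tracker URL and piece length. The file name example.torrent is hypothetical, and a real client would also need encoding, hashing of the info dictionary, and error handling.

# Minimal bencode decoder (illustrative only).
def bdecode(data, i=0):
    c = data[i:i + 1]
    if c == b"i":                                 # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                                 # list: l<items>e
        i, items = i + 1, []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":                                 # dictionary: d<key><value>...e
        i, d = i + 1, {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            d[key], i = bdecode(data, i)
        return d, i + 1
    colon = data.index(b":", i)                   # byte string: <length>:<bytes>
    length = int(data[i:colon])
    return data[colon + 1:colon + 1 + length], colon + 1 + length

with open("example.torrent", "rb") as f:          # hypothetical file name
    meta, _ = bdecode(f.read())
print(meta[b"announce"], meta[b"info"][b"piece length"])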

Conclusion

             Legitimate P2P use is here and has a definite role to play in the future of the Internet. Without a compromise between the copyright holders and the file sharers, there will be an ever-escalating arms race of technology versus legal maneuvers. BitTorrent is a nifty program that works in a simple, if counter-intuitive, way.


A Plan For No Spam

 Introduction
 
        Unwanted and irrelevant mass mailings, commonly known as spam, are becoming a serious nuisance that, if left unchecked, may soon be regarded as a denial-of-service attack against the email infrastructure of the Internet itself.

Best Practices

        The traditional response of the Internet community to problematic uses of deployed protocols is for administrators to specify some form of 'Best Practices'. Spam is an attack on the Internet community. The short survey and the prosecutions by the FTC and others show that the spam senders are in many cases outright criminals; how, then, can best practices help? One area in which best practices can provide concrete benefit is in ensuring that the vast majority of Internet users who are acting in good faith do not inadvertently make the problem worse by poorly chosen or poorly coordinated mitigation strategies.

 
Naive Keyword Inspection  

        Messages are scanned for the presence of words or phrases that occur frequently in spam messages, such as HGH or multi-level marketing. This type of filtering is implemented in many common email clients such as Outlook [MSFT]. Keyword inspection alone is simple to implement but tends to have a very high rate of false positives.
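A minimal sketch of naive keyword inspection, with an invented blocklist and threshold; as noted above, this approach by itself misclassifies legitimate mail that happens to mention a listed phrase.

# Naive keyword inspection: flag a message if it contains blocklisted phrases.
SUSPECT_PHRASES = ["hgh", "multi-level marketing", "act now", "free money"]

def looks_like_spam(message_text, threshold=1):
    text = message_text.lower()
    hits = sum(phrase in text for phrase in SUSPECT_PHRASES)
    return hits >= threshold

print(looks_like_spam("Earn FREE MONEY with multi-level marketing today!"))   # True
print(looks_like_spam("Minutes from Friday's project meeting attached."))     # False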

Authentication And Authorization

        Practically all spam messages sent today attempt to evade anti-spam measures by use of false header information. None of the spam messages that were examined in the writing of this paper carried a genuine sender address. Most of the messages contained From addresses that were obviously fake. In some cases the addresses were not even valid. Some contained no sender address at all.
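The sketch below performs only the weakest possible check, using nothing beyond Python's standard library: it flags messages whose From header is missing or not even syntactically plausible. Catching well-formed but forged addresses requires path- or signature-based verification, which this does not attempt.

from email import message_from_string
from email.utils import parseaddr

# Syntactic sanity check on the From: header (illustrative only).
def from_header_is_plausible(raw_message):
    msg = message_from_string(raw_message)
    name, addr = parseaddr(msg.get("From", ""))
    # Reject missing, empty, or obviously malformed sender addresses.
    return bool(addr) and "@" in addr and "." in addr.split("@")[-1]

print(from_header_is_plausible("From: Alice <alice@example.org>\n\nHello"))   # True
print(from_header_is_plausible("Subject: no sender at all\n\nBuy now"))       # False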

Legislation And Litigation

        The purpose of criminal legislation in a democratic society is to deter persons from engaging in prohibited conduct. While it is unlikely that criminal legislation alone would eliminate spam, such legislation would certainly create a deterrent for both the spam senders and the advertisers seeking their services. The legislative process is very slow and time-consuming. Legislators are reluctant to pass any legislation until they are confident that the implications are fully understood. Legislators will have to be convinced that any new legislation to address the problem of spam will bring benefits that significantly outweigh both the cost of enforcement and the political cost of committing the scarce resource of legislative time to the problem of spam rather than to other pressing problems.

Conclusion

        There are many techniques that each address a part of the spam problem. No currently known technique provides a complete solution, and it is unlikely that any technique will be found in the future that provides a complete and costless solution.



3D Optical Data Storage

Introduction

                3D Optical Data Storage is the term given to any form of optical data storage in which information can be recorded and/or read with three-dimensional resolution (as opposed to the two-dimensional resolution afforded, for example, by CD). This innovation has the potential to provide petabyte-level mass storage on DVD-sized disks. Data recording and readback are achieved by focusing lasers within the medium. However, because of the volumetric nature of the data structure, the laser light must travel through other data points before it reaches the point where reading or recording is desired. Therefore, some kind of nonlinearity is required to ensure that these other data points do not interfere with the addressing of the desired point.

Drive Design

        A drive designed to read and write to 3D optical data storage media may have a lot in common with CD/DVD drives, particularly if the form factor and data structure of the media is similar to that of CD or DVD. However, there are a number of notable differences that must be taken into account when designing such a drive.


Destructive Reading

         Since both the reading and the writing of data are carried out with laser beams, there is a potential for the reading process to cause a small amount of writing. In this case, the repeated reading of data may eventually serve to erase it (this also happens in phase change materials used in some DVDs).

Commercial Development

        In addition to the academic research, several companies have been set up to commercialize 3D optical data storage and some large corporations have also shown an interest in the technology. However, it is not yet clear how the technology will perform in the market in the presence of competition from other quarters such as hard drives, flash storage, and holographic storage.

Data Recording During Manufacturing

        Data may also be created in the manufacturing of the media, as is the case with most optical disc formats for commercial data distribution. In this case, the user cannot write to the disc - it is a ROM format. Data may be written by a nonlinear optical method, but in this case the use of very high power lasers is acceptable so media sensitivity becomes less of an issue.

Comparison With Blu-Ray Disc


        Blu-ray Disc (official abbreviation BD) is an optical disc storage medium designed to supersede the DVD format. The disc is a 120 mm diameter, 1.2 mm thick plastic optical disc, the same size as DVDs and CDs. Blu-ray Discs contain 25 GB (23.31 GiB) per layer, with dual-layer discs (50 GB) being the norm for feature-length video discs. Triple-layer discs (100 GB) and quadruple-layer discs (128 GB) are available for BD-XL Blu-ray re-writer drives.
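The GiB figures in parentheses are simply the decimal-to-binary unit conversion. A quick check (the exact per-layer byte count used in the second line is the commonly cited value and is an assumption here, not taken from the text):

GIB = 2 ** 30
print(round(25 * 10**9 / GIB, 2))        # 23.28 GiB for a plain 25 x 10^9 bytes
print(round(25_025_314_816 / GIB, 2))    # 23.31 GiB for the commonly cited exact layer size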



3D Password

Introduction

        Normally, the authentication scheme a user undergoes is either very lenient or very strict. Throughout the years authentication has been an interesting area of study. With technology constantly developing, it can be very easy for 'others' to fabricate or steal an identity or to hack someone's password. Therefore many algorithms have come up, each with an interesting approach toward calculation of a secret key. We present our idea, the 3D password, which is a more customizable and very interesting way of authentication. These passwords are based on the nature of human memory: generally, simple passwords are set so that they can be recalled quickly.


Brief Description Of System

        The proposed system is a multi-factor authentication scheme. It can combine all existing authentication schemes into a single 3D virtual environment. This 3D virtual environment contains several objects or items with which the user can interact. The user is presented with this 3D virtual environment where the user navigates and interacts with various objects. The sequence of actions and interactions toward the objects inside the 3D environment constructs the user’s 3D password.
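As a conceptual sketch (not the scheme's actual construction), the ordered sequence of interactions can be serialized and hashed so that repeating the same walkthrough in the virtual environment reproduces the same credential; the object names and actions below are invented.

import hashlib

# Hash an ordered list of (object, action) interactions into a credential.
def password_from_interactions(interactions):
    serialized = "|".join(f"{obj}:{act}" for obj, act in interactions)
    return hashlib.sha256(serialized.encode()).hexdigest()

walkthrough = [
    ("front_door", "open"),
    ("piano", "play C-E-G"),
    ("computer", "type s3cret"),
]
print(password_from_interactions(walkthrough))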

Existing System

        Current authentication systems suffer from many weaknesses. Textual passwords are commonly used. Users tend to choose meaningful words from dictionaries, which make textual passwords easy to break and vulnerable to dictionary or brute force attacks. Many available graphical passwords have a password space that is less than or equal to the textual password space. Smart cards or tokens can be stolen.

Well-Studied Attack

         The attacker tries to find the highest probable distribution of 3D passwords. In order to launch such an attack, the attacker has to acquire knowledge of the most probable 3D password distributions. This is very difficult because the attacker has to study all the existing authentication schemes that are used in the 3D environment. It requires a study of the user’s selection of objects for the 3D password. Moreover, a well studied attack is very hard to accomplish since the attacker has to perform a customized attack for every different 3D virtual environment design.

Conclusion


        The 3D password is a multi-factor authentication scheme that combines the various authentication schemes into a single 3D virtual environment. The virtual environment can contain any existing authentication scheme, or even any upcoming authentication scheme, by adding it as a response to actions performed on an object. Therefore the resulting password space becomes very large compared to any existing authentication scheme.



Thursday, May 15

Symbian OS

Symbian History
            Symbian OS started life as EPOC - the operating system used for many years in Psion handheld devices. When Symbian was formed in 1998, Psion contributed EPOC into the group. EPOC was renamed Symbian OS and has been progressively updated, incorporating both voice and data telephony technologies of ever greater sophistication with every product release.

 Abstract
              Symbian OS is designed for the mobile phone environment. It addresses constraints of mobile phones by providing a framework to handle low memory situations, a power management model, and a rich software layer implementing industry standards for communications, telephony and data rendering. Even with these abundant features, Symbian OS puts no constraints on the integration of other peripheral hardware. Symbian OS is proven on several platforms. It started life as the operating system for the Psion series of consumer PDA products (including Series 5mx, Revo and netBook), and various adaptations by Diamond, Oregon Scientific and Ericsson.


Product Diversity
        There is an apparent contradiction between software developers who want to develop for just one popular platform and manufacturers who each want to have a range of distinctive and innovative products. The circle can be squared by separating the user interface from the core operating system. Advanced mobile phones or “Smartphones” will come in all sorts of shapes - from traditional designs resembling today’s mobile phones with main input via the phone keypad, to a tablet form factor operated with a stylus, to phones with larger screens and small keyboards.

Introduction
            Small devices come in many shapes and sizes, each addressing distinct target markets that have different requirements. The market segment we are interested in is that of the mobile phone. The primary requirement of this market segment is that all products are great phones. This segment spans voice-centric phones with information capability to information-centric devices with voice capability. These advanced mobile phones integrate fully-featured personal digital assistant (PDA) capabilities with those of a traditional mobile phone in a single unit. 

Basic Principles 
            The cornerstone of Symbian’s modus operandi is to use open, agreed standards wherever possible. Symbian is focused squarely on one part of the value chain: providing the base operating system for mobile internet devices. This enables manufacturers, networks and application developers to work together on a common platform.

Conclusion 
        Symbian OS is a robust multi-tasking operating system, designed specifically for real-world wireless environments and the constraints of mobile phones (including limited amount of memory). Symbian OS is natively IP-based, with fully integrated communications and messaging.


Light Fidelity Li-Fi

Abstract
         Whether you’re using wireless internet in a coffee shop, stealing it from the guy next door, or competing for bandwidth at a conference, you’ve probably gotten frustrated at the slow speeds you face when more than one device is tapped into the network. As more and more people and their many devices access wireless internet, clogged airwaves are going to make it increasingly difficult to latch onto a reliable signal. 

What Is Li-Fi?      
        Li-Fi is a VLC, visible light communication, technology developed by a team of scientists including Dr Gordon Povey, Prof. Harald Haas and Dr Mostafa Afgani at the University of Edinburgh. The term Li-Fi was coined by Prof. Haas when he amazed people by streaming high-definition video from a standard LED lamp, at TED Global in July 2011. Li-Fi is now part of the Visible Light Communications (VLC) PAN IEEE 802.15.7 standard.


Why Li-Fi?
  •  There are 1.4 million cellular radio masts deployed worldwide.
  •  There are more than five billion wi-fi devices present.
  •  With all these devices, we transmit more than 600 terabytes of data every month.
How Li-Fi Works?
           Li-Fi is typically implemented using white LED light bulbs at the downlink transmitter.  These devices are normally used for illumination only by applying a constant current.  However, by fast and subtle variations of the current, the optical output can be made to vary at extremely high speeds.
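A toy illustration of the idea, using simple on-off keying where each bit becomes a block of high or low drive-current samples; real Li-Fi modulation schemes are far more sophisticated and keep perceived brightness constant, so treat this purely as a sketch.

# On-off keying sketch: map bits to blocks of LED drive levels.
def ook_waveform(bits, samples_per_bit=4, high=1.0, low=0.1):
    # 'low' stays above zero so the lamp never appears to switch off.
    return [high if b else low for b in bits for _ in range(samples_per_bit)]

print(ook_waveform([1, 0, 1, 1], samples_per_bit=2))
# [1.0, 1.0, 0.1, 0.1, 1.0, 1.0, 1.0, 1.0]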

Light For Wireless Communication
            Light is inherently safe and can be used in places where radio frequency communication is often deemed problematic, such as in aircraft cabins or hospitals. So visible light communication not only has the potential to solve the problem of lack of spectrum space, but can also enable novel applications. The visible light spectrum is unused, it's not regulated, and can be used for communication at very high speeds.

Conclusion
         The fact that Li-Fi is being considered as one of the IEEE 802.xx standards bodes well for its potential success. Like other 802.xx standards, it is defined only at layers 1 and 2 (physical and media access control (MAC) layers) of the Open Systems Interconnection (OSI) model. Layer 3 and higher layers need to be designed using the Internet Engineering Task Force (IETF) packet transport standards. 


LWIP

Overview

 As in many other TCP/IP implementations, the layered protocol design has served as a guide for the design of the implementation of lwIP. Each protocol is implemented as its own module, with a few functions acting as entry points into each protocol. Even though the protocols are implemented separately, some layer violations are made, as discussed above, in order to improve performance both in terms of processing speed and memory usage. For example, when verifying the checksum of an incoming TCP segment and when demultiplexing a segment, the source and destination IP addresses of the segment have to be known by the TCP module.
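The reason the IP addresses are needed is that the TCP checksum is computed over a pseudo-header containing them. The sketch below shows that standard construction in Python rather than lwIP's C, purely for illustration.

import socket
import struct

# One's-complement Internet checksum over 16-bit words.
def internet_checksum(data):
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# The TCP checksum covers a pseudo-header with source/destination IPs,
# the protocol number and the TCP length, followed by the segment itself.
def tcp_checksum(src_ip, dst_ip, tcp_segment):
    pseudo_header = struct.pack("!4s4sBBH",
                                socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                                0, socket.IPPROTO_TCP, len(tcp_segment))
    return internet_checksum(pseudo_header + tcp_segment)

print(hex(tcp_checksum("192.168.0.1", "192.168.0.2", b"\x00" * 20)))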

Basic Concepts

 From the application's point of view, data handling in the BSD socket API is done in continuous memory regions. This is convenient for the application programmer, since manipulation of data in application programs is usually done in such continuous memory chunks. Using this type of mechanism with lwIP would not be advantageous, since lwIP usually handles data in buffers where the data is partitioned into smaller chunks of memory. Thus the data would have to be copied into a continuous memory area before being passed to the application, which would waste both processing time and memory.

UDP Processing

                     
UDP is a simple protocol used for demultiplexing packets between different processes. The state for each UDP session is kept in a PCB structure. The UDP PCBs are kept on a linked list which is searched for a match when a UDP datagram arrives. The UDP PCB structure contains a pointer to the next PCB in the global linked list of UDP PCBs. A UDP session is defined by the IP addresses and port numbers of the end-points, and these are stored in the local ip, dest ip, local port and dest port fields.
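A Python sketch of that lookup, assuming a simplified PCB with exactly the four end-point fields named above (lwIP itself walks a singly linked list of C structs and also supports wildcard matches):

from dataclasses import dataclass
from typing import Optional

@dataclass
class UdpPcb:
    local_ip: str
    dest_ip: str
    local_port: int
    dest_port: int
    next: Optional["UdpPcb"] = None   # next PCB in the global linked list

# Walk the PCB list until a session matches the incoming datagram.
def demultiplex(pcb_list_head, src_ip, dst_ip, src_port, dst_port):
    pcb = pcb_list_head
    while pcb is not None:
        if (pcb.local_ip == dst_ip and pcb.local_port == dst_port
                and pcb.dest_ip == src_ip and pcb.dest_port == src_port):
            return pcb
        pcb = pcb.next
    return None   # no matching session for this datagram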


 Abstract

lwIP is an implementation of the TCP/IP protocol stack. Interest in connecting small devices to existing network infrastructure, such as the global Internet, is steadily increasing.

Queuing And Transmitting Data

Data that is to be sent is divided into appropriately sized chunks and given sequence numbers by the tcp_enqueue() function. Here, the data is packed into pbufs and enclosed in a tcp_seg structure. The TCP header is built in the pbuf and filled in with all fields except the acknowledgment number, ackno, and the advertised window, wnd. These fields can change during the queuing time of the segment and are therefore set by tcp_output(), which does the actual transmission of the segment. After the segments are built, they are queued on the unsent list in the PCB.
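A sketch of that segmentation step, assuming a simple dictionary per segment rather than lwIP's tcp_seg/pbuf structures; as in the description, ackno and wnd are deliberately left unset until transmission time.

# Split outgoing data into MSS-sized, sequence-numbered segments.
def enqueue(data, start_seqno, mss):
    segments = []
    for offset in range(0, len(data), mss):
        chunk = data[offset:offset + mss]
        segments.append({
            "seqno": start_seqno + offset,   # sequence number of the first byte
            "len": len(chunk),
            "payload": chunk,
            "ackno": None,                   # filled in by the output routine
            "wnd": None,                     # advertised window, also set later
        })
    return segments                          # these would go on the unsent list

for seg in enqueue(b"A" * 3000, start_seqno=1000, mss=1460):
    print(seg["seqno"], seg["len"])          # 1000 1460 / 2460 1460 / 3920 80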

 Introduction

Over the last few years, the interest in connecting computers and computer-supported devices to wireless networks has steadily increased. Computers are becoming more and more seamlessly integrated with everyday equipment and prices are dropping. At the same time wireless networking technologies, such as Bluetooth and IEEE 802.11b WLAN, are emerging. This gives rise to many new fascinating scenarios in areas such as health care, safety and security, transportation, and the processing industry.


Mobile WiMax

Abstract

          Within the last two decades, communication advances have reshaped the way we live our daily lives. Wireless communications has grown from an obscure, unknown service to a ubiquitous technology that serves almost half of the people on Earth. Whether we know it or not, computers now play a dominant role in our daily activities, and the Internet has completely reoriented the way people work, communicate, play, and learn. However severe the changes in our lifestyle may seem to have been over the past few years, the convergence of wireless with the Internet is about to unleash a change so dramatic that soon wireless ubiquity will become as pervasive as paper and pen.

Introduction

           Broadband wireless sits at the confluence of two of the most remarkable growth stories of the telecommunications industry in recent years. Both wireless and broadband have on their own enjoyed rapid mass-market adoption. Wireless mobile services grew from 11 million subscribers worldwide in 1990 to more than 2 billion in 2005 [4]. During the same period, the Internet grew from being a curious academic tool to having about a billion users. This staggering growth of the Internet is driving demand for higher-speed Internet-access services, leading to a parallel growth in broadband adoption. In less than a decade, broadband subscription worldwide has grown from virtually zero to over 200 million.


 WiMax VoIP

            A fixed wireless solution not only offers competitive internet access, it can do the same for telephone service, thus further bypassing the telephone company's copper wire network. Voice over Internet Protocol (VoIP) offers a wider range of voice services at reduced cost to subscribers and service providers alike. A typical solution is one where a WiMax service provider obtains wholesale VoIP services (with no need for the WiMax service provider to install and operate a VoIP softswitch) at about $5 per number per month and resells them to enterprise customers at around $50, with residential markets served in a similar way.

Future Scope

        The IEEE 802.16m standard is the core technology for the proposed Mobile WiMax Release 2, which enables more efficient, faster, and more converged data communications. The IEEE 802.16m standard has been submitted to the ITU for IMT-Advanced standardization, and it is one of the major candidates for IMT-Advanced technologies. Among many enhancements, IEEE 802.16m systems can provide data speeds four times faster than the current Mobile WiMax Release 1, which is based on IEEE 802.16e technology.

Conclusion

         WiMax offers benefits for wire line operators who want to provide last mile access to residences and businesses, either to reduce costs in their own operating areas, or as a way to enter new markets. 802.16e offers cost reductions to mobile operators who wish to offer broadband IP services in addition to 2G or 3G voice service, and allows operators to enter new markets with competitive services, despite owning disadvantaged spectrum. The capital outlay for WiMAX equipment will be less than for traditional 2G and 3G wireless networks, although the supporting infrastructure of cell sites, civil works, towers and so on will still be needed.


Motes

Introduction 

Over the last year or so you may have heard about a new computing concept known as motes. This concept is also called smart dust and wireless sensing networks. It seems like just about every issue of Popular Science, Discover and Wired today contains a blurb about some new application of the mote idea. 

Intel Mote Hardware 

The Intel Mote has been designed after a careful study of the application space for sensor networks. We have interviewed a number of researchers in this space and collected their feedback on desired improvements over currently available mote designs. 

A Typical Mote

The MICA mote is a commercially available product that has been used widely by researchers and developers. It has all of the typical features of a mote and therefore can help you understand what this technology makes possible today. MICA motes are available to the general public through a company called Crossbow. These motes come in two form factors: 

      Rectangular, measuring 2.25 x 1.25 x 0.25 inches (5.7 x 3.18 x 0.64 centimeters), it is sized to fit on top of two AA batteries that provide it with power. 

      Circular, measuring 1.0 x 0.25 inches (2.5 x 0.64 centimeters), it is sized to fit on top of a 3-volt button cell battery. 


Bluetooth Based Mesh Networks 

Bluetooth was originally designed for personal area networks (PANs) that are quite different from the application that we had in mind. PANs are often simple star network topologies that consist of a single master and a number of attached slaves. A very simple example would be a BT-enabled cell phone and wireless headset (a point-to-point connection consisting of a single master and single slave). A more complex network could involve a PC as the master with mouse, keyboard and printer attached as wireless slaves. Such a network is called a piconet in the BT specification. 

For a number of sensor network applications that use battery powered nodes, low power consumption is essential. In order to support this, the Intel Mote employs a number of power reduction schemes. The overall platform can be brought to a low power state when no active computation or communication is ongoing. In this mode, power consumption is about 1 mW or less. In addition, the Bluetooth protocol allows the radio to enter low power states in between active communication slots. Special commands allow devices to enter “hold”, “sniff” or “park” modes.

Conclusion

We have described the design of a new enhanced sensor network node, called the Mote. This device provides enhanced CPU, storage and radio facilities that various sensor network application developers and implementers have been asking for.
 

Next Generation Secure Computing Base NGSCB

New Hardware Components For NGSCB

        The following minimum set of hardware components is required to support the NGSCB architecture and features:

  •  An NGSCB-enabled CPU
  •  An NGSCB-enabled chipset
  •  A dedicated SSC (security support component) that is physically bound to the NGSCB system motherboard
  •  Secure input devices, including a keyboard and mouse

Abstract

            The next-generation secure computing base (NGSCB) is an industry-wide initiative that combines computer hardware platform enhancements with trustworthy-computing capabilities and services. NGSCB requires changes to the operating system and hardware. Some scenarios will also require enabling via network infrastructure. While existing programs will continue to work on a computer running NGSCB, they must be rewritten to take advantage of the new security provided by NGSCB. 




 Introduction

          Today's personal computing environment is built on flexible, extensible, and feature-rich platforms that enable consumers to take advantage of a wide variety of devices, applications, and services. Unfortunately, the evolution of shared networks and the Internet has made computers more susceptible to attacks at the hardware, software, and operating system levels. 

Authenticated Operation

               One of the key features of NGSCB is authenticated operation. Trusted applications running in the protected operating environment are identified and authenticated by their code identity, which is computed by the nexus. That code identity is the digest of the application's manifest. The user can define policies that restrict access to sealed secrets based on the application's code identity.
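A purely conceptual sketch of sealing bound to code identity, not the actual nexus API: the secret is encrypted under a key derived from the manifest digest, so only code presenting the same identity can recover it. The manifest contents, key handling and toy XOR cipher below are all invented for illustration.

import hashlib, hmac, os

def code_identity(manifest_bytes):
    return hashlib.sha256(manifest_bytes).digest()     # digest of the manifest

# Toy seal/unseal: derive a key from (platform key, code identity) and XOR the
# data with a keystream. A real system would use authenticated encryption.
def seal(data, manifest_bytes, platform_key):
    key = hmac.new(platform_key, code_identity(manifest_bytes), hashlib.sha256).digest()
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

platform_key = os.urandom(32)                          # stands in for the SSC's key
manifest = b"app: example-trusted-agent v1"            # hypothetical manifest
blob = seal(b"database password", manifest, platform_key)
print(seal(blob, manifest, platform_key))              # same identity -> unseals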

Secure Video Hardware

               Secure video hardware and software work together to ensure that secure windows cannot be obscured, captured by unauthorized software, or altered by unauthorized software. The focus of secure video is protecting the path used to transfer video data from the nexus to the graphics adaptor. A secure graphics adaptor can be integrated in the chipset with a special closed path between it and the nexus. For example, as part of this solution, the graphics adaptor could offer a set of registers at a fixed address, accessible only when the system is running in nexus mode. 

Conclusions

              NGSCB provides a protected run environment for programs, which isolates them from other programs. Each program is protected from software attack, even from the operating system. Unlike conventional authentication models, NGSCB is rooted in software authentication and provides software isolation, secure storage, attestation, and secure I/O operations.


Free Space Laser Communications


Abstract 

     Laser communications offer a viable alternative to RF communications for intersatellite links and other applications where high-performance links are a necessity. 

Introduction

      Lasers have been considered for space communications since their realization in 1960. However, it was soon recognized that, although the laser had potential for the transfer of data at extremely high rates, specific advancements were needed in component performance and systems engineering, particularly for space-qualified hardware.

Features Of Laser Communications System

     A block diagram of a typical terminal is illustrated in Fig. 1. Information, typically in the form of digital data, is input to data electronics that modulate the transmitting laser source. Direct or indirect modulation techniques may be employed depending on the type of laser employed.


Detector Parameters

           The detector parameters are the type of detector, gain of the detector (if any), quantum efficiency, heterodyne mixing efficiency (for coherent detection only), noise due to the detector, noise due to the following preamplifier, and (for track links) angular sensitivity or slope factor of the detector. For optical ISLs based on semiconductor laser diodes or Nd: YAG lasers, the detector of choice is a p-type-intrinsic-n-type (PIN) or an avalanche photodiode (APD). A PIN photodiode can be operated in the photovoltaic or photoconductive mode, and has no internal gain mechanism. An APD is always operated in the photoconductive mode and has internal gain by virtue of the avalanche multiplication process. At shorter wavelengths (810-900 nm) PINs and APDs made of silicon show the best response, but at longer wavelengths (1300-1550 nm) InGaAs and Ge APDs have significantly more excess noise than comparable silicon devices.

Link Parameters

            The link parameters are the type of laser, wavelength, type of link, and required signal criteria. Although virtually every laser type has been considered at one time or another, today the lasers typically used in free space laser communication systems are semiconductor laser diodes, solid state lasers, or fiber amplifiers/lasers. Laser sources are typically described as operating in either single or multiple longitudinal modes. In single longitudinal mode operation the laser emits radiation at a single frequency, while in multiple longitudinal mode operation multiple frequencies are emitted. Single-mode sources are required in coherent detection systems and typically have spectral widths on the order of 10 kHz to 10 MHz. 
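As a rough illustration of how these parameters interact, the back-of-the-envelope estimate below computes only the geometric capture of a diverging beam; all numbers are assumed, and real link budgets also include optics efficiencies, pointing loss and detector noise.

# Geometric link estimate: the beam spreads to roughly divergence * range,
# and the receiver aperture collects its area fraction of that spot.
def received_power(p_tx_watts, divergence_rad, range_m, rx_aperture_m):
    spot_diameter = divergence_rad * range_m            # far-field spot size
    fraction = (rx_aperture_m / spot_diameter) ** 2     # aperture area / spot area
    return p_tx_watts * min(fraction, 1.0)

# 1 W transmitter, 10 microradian divergence, 40,000 km crosslink, 25 cm aperture
print(received_power(1.0, 10e-6, 4.0e7, 0.25))          # ~3.9e-07 W before other losses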

Conclusions

           The system and component technology necessary for successful intersatellite laser communication links exists today. The growing requirement for efficient and secure communications has led to increased interest in the operational deployment of laser crosslinks for commercial and military satellite systems in both low earth and geosynchronous orbits.

