4G Communication

4G WIRELESS COMMUNICATIONS
Anto Vinoth M., Punith Maharishi Y. R.
antovinoth.m@gmail.com, maharishipunith@yahoo.com

Abstract— Mobile communication is continuously one of the hottest areas of development, with advanced techniques emerging in all fields of mobile and wireless communications. With this rapid development it is expected that fourth generation mobile systems will be launched within the coming decades. 4G mobile systems focus on seamlessly integrating the existing wireless technologies. This contrasts with 3G, which merely focuses on developing new standards and hardware. 4G systems will support comprehensive and personalized services, providing stable system performance and quality service. "4G" doesn't just define a standard; it describes an environment where radio access methods will be able to interoperate to provide communications sessions that can seamlessly "hand off" between them. More than any other technology, 4G will have a profound impact on the entire wireless landscape and the total value chain. This paper focuses on the vision of 4G and briefly explains the technologies and features of 4G.


Introduction: Mobile communications and wireless networks are developing at an astounding speed. The approaching 4G (fourth generation) mobile communication systems are projected to solve the still-remaining problems of 3G (third generation) systems and to provide a wide variety of new services, from high-quality voice to high-definition video to high-data-rate wireless channels. 4G can be defined as MAGIC: Mobile multimedia, Anytime anywhere, Global mobility support, Integrated wireless solution, and Customized personal service. 4G is used broadly to include several types of broadband wireless access communication systems along with cellular telephone systems. The 4G systems will support not only the next generation of mobile services but also fixed wireless networks. The 4G systems will interoperate with 2G and 3G systems, as well as with digital (broadband) broadcasting systems and IP-based ones. The 4G infrastructure consists of a set of various networks using IP (Internet Protocol) as a common protocol, so that users are in control because they will be able to choose every application and environment. 4G mobile data transmission rates are planned to be up to 20 megabits per second.
Evolution:
• Traditionally, wireless systems were considered an auxiliary approach, used in regions where it was difficult to build a wireline connection.
• 1G was based on analog techniques and was deployed in the 1980s. It built the basic structure of mobile communications and solved many fundamental problems, e.g. adopting a cellular architecture, multiplexing the frequency band, roaming across domains, and uninterrupted communication while mobile.

Speech chat was the only service of 1G.
• 2G was based on digital signal processing techniques and is regarded as a revolution from analog to digital technology. It gained tremendous success during the 1990s, with GSM as its representative. The utilization of SIM (Subscriber Identity Module) cards and support for a large number of users were 2G's main contributions.
• 2.5G extended 2G with data services and packet-switching methods, and was regarded as providing 3G services over 2G networks. Running on the same networks as 2G, 2.5G brought the Internet into mobile personal communications. This was a revolutionary concept leading to hybrid communications.
• 3G deploys new systems offering multimedia transmission, global roaming across a cellular or other single type of wireless network, and bit rates ranging from 384 Kbps to several Mbps. Based on intelligent DSP techniques, various multimedia data communication services are carried by convergent 3G networks.
3G still leaves some problems unsolved or only partly addressed. The limitations and difficulties of 3G include:
• Difficulty in continuously increasing bandwidth and data rates to meet multimedia service requirements, together with the coexistence of different services needing different QOS (Quality of Service) and bandwidth.
• Limitation of spectrum and its allocation.
• Difficulty roaming across distinct service environments in different frequency bands.
• Lack of an end-to-end seamless transport mechanism spanning a mobile sub-network and a fixed one.
However, the demand for higher-speed multimedia communication in today's society and the limitations of 3G services pave the way for 4G mobile communication.

Architecture of 4G: One of the most challenging problems facing deployment of 4G technology is how to access several different mobile and wireless networks. There are three possible architectures for 4G:
• Multimode devices
• Overlay network
• Common access protocol
Multimode devices: This architecture uses a single physical terminal with multiple interfaces to access services on different wireless networks. It may improve call completion and expand effective coverage area. It should also provide reliable wireless coverage in case of network, link, or switch failure. The user, device, or network can initiate handoff between networks.

The device itself incorporates most of the additional complexity without requiring wireless network modification or employing interworking devices. Each network can deploy a database that keeps track of user location, device capabilities, network conditions, and user preferences. The handling of quality-of-service (QOS) issues remains an open research question.
Overlay network: In this architecture, a user accesses an overlay network consisting of several universal access points (UAPs). These UAPs in turn select a wireless network based on availability, QOS (Quality of Service) specifications, and user-defined choices.

A UAP performs protocol and frequency translation, content adaptation, and QOS negotiation and renegotiation on behalf of users. The overlay network, rather than the user or device, performs handoffs as the user moves from one UAP to another. A UAP stores user, network, and device information, capabilities, and preferences. Because UAPs can keep track of the various resources a caller uses, this architecture supports single billing and subscription.
Common access protocol: This protocol becomes viable if wireless networks can support one or two standard access protocols.

One possible solution, which will require interworking between different networks, uses wireless asynchronous transfer mode (ATM). To implement wireless ATM, every wireless network must allow transmission of ATM cells with additional headers (wireless ATM cells), which requires changes in the wireless networks. One or more types of satellite-based networks might use one protocol while one or more terrestrial wireless networks use another.
4G mobile technologies:
a) Open Wireless Architecture (OWA)
b) Spectrum-efficient high-speed wireless mobile transmission
a) Open Wireless Architecture (OWA):

A single system architecture characterized by a horizontal communication model, providing a common platform to complement different access technologies in an optimum way for different service requirements and radio environments, is called the converged broadband wireless platform or open wireless architecture (OWA). OWA will be the next storm in wireless communications, fueled by many emerging technologies including digital signal processing, software-definable radio, intelligent antennas, superconductor devices, and digital transceivers. The open wireless platform requires:
• Area- and power-efficient broadband signal processing for wideband wireless applications
• The highest industry channel density (MOPS pooling) in flexible new BTS signal processing architectures
• BTS solutions scalable to higher clock rates and higher network capacity
• Waveform-specific processors providing a new architecture for platform reuse in terminals for multiservice capability
• Terminal solutions achieving the highest computational efficiency for applications with high flexibility
• A powerful layered software architecture using the virtual machine programming concept.

Depending on the requirements, the following Open Wireless Platform architectures have been developed.
Adaptive Modulation and Coding (AMC): The principle of AMC is to change the modulation and coding format (transport format) in accordance with instantaneous variations in the channel conditions, subject to system restrictions. AMC extends the system's ability to adapt to good channel conditions. Channel conditions should be estimated based on feedback from the receiver.
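The rate-selection step at the heart of AMC can be sketched in a few lines of Python. The SNR thresholds and the modulation/coding table below are illustrative assumptions, not values taken from any particular standard.

```python
# Minimal AMC sketch: pick a modulation and coding scheme from the SNR
# reported by the receiver. Thresholds and schemes are illustrative only.

MCS_TABLE = [
    # (minimum SNR in dB, modulation, code rate)
    (18.0, "64-QAM", 3 / 4),
    (12.0, "16-QAM", 1 / 2),
    (5.0, "QPSK", 1 / 2),
]

def select_mcs(snr_db):
    """Return the highest-rate scheme whose SNR threshold is met."""
    for min_snr, modulation, code_rate in MCS_TABLE:
        if snr_db >= min_snr:
            return modulation, code_rate
    # Cell-edge fallback: the most robust scheme in the table.
    return MCS_TABLE[-1][1:]

# Users near the cell site (high SNR) get higher-order modulation and rates;
# users near the cell edge fall back to robust, low-rate schemes.
for snr_db in (22.0, 14.0, 3.0):
    print(snr_db, "dB ->", select_mcs(snr_db))
```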

For a system with AMC, users close to the cell site are typically assigned higher-order modulation with higher code rates. On the other hand, users close to the cell boundary are assigned lower-order modulation with lower code rates. AMC allows different data rates to be assigned to different users depending on their channel conditions.
Adaptive Hybrid ARQ: A successful broadband wireless system must have an efficient co-designed medium access control (MAC) layer for reliable link performance over the lossy wireless channel.

The corresponding MAC is designed so that the TCP/IP layer sees the high-quality link it expects. This is achieved by an automatic retransmission and fragmentation mechanism (ARQ), wherein the transmitter breaks up packets received from higher layers into smaller sub-packets, which are transmitted sequentially. If a sub-packet is received incorrectly, the transmitter is requested to retransmit it. ARQ can be seen as a mechanism for introducing time-diversity into the system due to its capability to recover from noise, interference, and fades.
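A toy simulation of the fragment-and-retransmit behaviour described above, assuming an invented loss probability and fragment size; real MAC layers use windowed ARQ with sequence numbers and acknowledgements rather than this simple stop-and-wait form.

```python
import random

# Stop-and-wait ARQ sketch with fragmentation: the sender splits a packet into
# sub-packets and retransmits any sub-packet the lossy channel corrupts.
FRAGMENT_SIZE = 4
LOSS_PROBABILITY = 0.3      # illustrative value

def send_over_lossy_channel(fragment: bytes) -> bool:
    """Pretend to transmit a fragment; return True if it arrived intact."""
    return random.random() > LOSS_PROBABILITY

def transmit(packet: bytes) -> int:
    """Send a packet fragment by fragment, retransmitting failures.
    Returns the total number of transmission attempts."""
    fragments = [packet[i:i + FRAGMENT_SIZE]
                 for i in range(0, len(packet), FRAGMENT_SIZE)]
    attempts = 0
    for fragment in fragments:
        while True:                      # retransmit until acknowledged
            attempts += 1
            if send_over_lossy_channel(fragment):
                break
    return attempts

print(transmit(b"higher-layer packet"), "attempts")
```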

Hybrid ARQ self-optimizes and adjusts automatically to channel conditions without requiring frequent or highly accurate C/I measurements: 1) it adds redundancy only when needed; 2) the receiver saves failed transmission attempts to help future decoding; 3) every transmission helps to increase the packet success probability.
Space-Time Coding and MIMO (Multiple-Input Multiple-Output): Increasing demand for high-performance 4G broadband wireless mobile systems calls for the use of multiple antennas at both the base station and subscriber ends.

Multiple antenna technologies enable high capacities suited for Internet and multimedia services and also dramatically increase range and reliability. The challenge for wireless broadband access lies in providing a quality of service comparable to competing wireline technologies at a similar cost. The target frequency band for this system is 2 to 5 GHz due to favorable propagation characteristics and low radio-frequency (RF) equipment cost. The broadband channel is typically a non-LOS channel and includes impairments such as time-selective fading and frequency-selective fading.
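The capacity claim can be illustrated with the standard log-det capacity expression for a flat-fading MIMO channel; the SNR, antenna counts, and random Rayleigh channels below are illustrative choices, not system parameters from this paper.

```python
import numpy as np

# Ergodic capacity of an Nt x Nr MIMO link over random Rayleigh channels,
# C = log2 det(I + (SNR/Nt) * H * H^H), compared with a single-antenna link.
def mimo_capacity(n_tx: int, n_rx: int, snr_linear: float, trials: int = 2000) -> float:
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(trials):
        h = (rng.standard_normal((n_rx, n_tx)) +
             1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        m = np.eye(n_rx) + (snr_linear / n_tx) * h @ h.conj().T
        total += np.log2(np.linalg.det(m).real)
    return total / trials

snr = 10 ** (10 / 10)                      # 10 dB, illustrative
print("1x1:", round(mimo_capacity(1, 1, snr), 2), "bps/Hz")
print("4x4:", round(mimo_capacity(4, 4, snr), 2), "bps/Hz")
```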

Advantages:
• Spatial diversity and coding gains for large link budget gains (>10 dB).
• Increased data rates due to multiple transmit and receive antennas.
• Increased base station-to-user capacity.
• Cost is scalable with performance.
Multiple antennas at the transmitter and receiver provide diversity in a fading environment: by employing multiple antennas, multiple spatial channels are created, and it is unlikely that all the channels will fade simultaneously.
OFDM (Orthogonal Frequency Division Multiplexing):

OFDM is chosen over a single-carrier solution due to the lower complexity of equalizers for high delay-spread channels or high data rates. A broadband signal is broken down into multiple narrowband carriers (tones), where each carrier is more robust to multipath. In order to maintain orthogonality amongst tones, a cyclic prefix is added whose length is greater than the expected delay spread. With proper coding and interleaving across frequencies, multipath turns into an OFDM system advantage by yielding frequency diversity.
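A minimal end-to-end OFDM sketch, assuming an ideal channel and illustrative sizes: symbols are mapped onto orthogonal tones with an IFFT, a cyclic prefix longer than the expected delay spread is prepended, and the receiver strips the prefix and applies an FFT (anticipating the FFT implementation discussed next).

```python
import numpy as np

N_TONES = 64
CYCLIC_PREFIX = 16          # must exceed the expected delay spread (in samples)

rng = np.random.default_rng(0)
# One QPSK symbol per tone
bits = rng.integers(0, 2, size=(N_TONES, 2))
symbols = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: IFFT onto tones, then prepend the cyclic prefix
time_signal = np.fft.ifft(symbols)
tx = np.concatenate([time_signal[-CYCLIC_PREFIX:], time_signal])

# Receiver (ideal channel): strip the prefix, FFT back to tones
rx_symbols = np.fft.fft(tx[CYCLIC_PREFIX:])

print("max reconstruction error:", np.max(np.abs(rx_symbols - symbols)))
```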

OFDM can be implemented efficiently by using FFTs at the transmitter and receiver. At the receiver, the FFT reduces the channel response to a multiplicative constant on a tone-by-tone basis.
Advantage:
• Frequency selectivity caused by multipath improves the rank distribution of the channel matrices across frequency tones, thereby increasing capacity.
Open Backbone Network Access Platform: In recent years, access aggregation technologies have been developed that allow a common access and transport network to bear the traffic of subscribers from multiple service providers.

Separating access and transport from service accomplishes two points: • It eliminates the burden of building out an access network, reducing the barrier to entry for new service providers and improving the growth potential for existing service providers. • It promotes technical and business efficiencies for access and transport enterprises due to economies of scale and the ability to resell that access infrastructure to multiple service providers. New systems provide end-to-end direct IP connections for users by extending access aggregation architectures to mobile broadband access.

Network and service providers can leverage existing equipment, tool, and content bases to support mobile broadband end users, while the end users experience the best of the wireless and wired worlds.
Wireless mobile Internet: This will be the key application of the converged broadband wireless system. The terminal will be very smart instead of dumb, compatible with mobile and access services including wireless multicasting as well as wireless trunking. This new wireless terminal will have the following features:
• 90 percent of traffic will be data.
• The security function will be enhanced (e.g. an embedded fingerprint chip).
• The voice recognition function will be enhanced; a keypad or keyboard attachment will be an option, as will wirelessness.
• The terminal will support single and multiple users with various service options.
• The terminal will be fully adaptive and software-reconfigurable.
b) Spectrum-efficient high-speed wireless mobile transmission: The spectral efficiency of wide-area wireless broadband systems can yield a system capacity that allows broadband service to be delivered simultaneously to many users in a cell, reducing the cost of service delivery for this mass-market broadband service.

These systems are optimized to exploit the full potential of adaptive antenna signal processing, thereby providing robust, high-speed connections for mobile users with a minimum of radio infrastructure. The spectral efficiency of a radio system (the quantity of billable services that can be delivered in a unit of spectrum) directly impacts network economics and service quality.

Spectrally efficient systems have the following characteristics:
• Reduced spectrum requirements, minimizing up-front capital expenses related to spectrum
• Reduced infrastructure requirements, minimizing capital and operating costs associated with base station sites, translating into reduced costs per subscriber and per covered population element
• High capacity, maximizing system throughput and the end-user experience even under load
The acquisition of spectrum is a key component of the cost structure of wireless systems, and two key features have a great impact on that cost: the spectral efficiency of the wireless system and the type of spectrum required to implement it. A fully capable and commercially viable mobile broadband system can operate in as little as 5 MHz of unpaired spectrum with a total of 20 Mbps throughput per cell in that amount of spectrum. Spectral efficiency measures the ability of a wireless system to deliver information, "billable services," with a given amount of radio spectrum. In cellular radio systems, spectral efficiency is measured in bits/second/Hertz/cell (bps/Hz/cell).
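A quick check of the figures quoted above:

```python
# 20 Mbps per cell delivered in 5 MHz of unpaired spectrum corresponds to
# 20e6 / 5e6 = 4 bps/Hz/cell.
throughput_bps = 20e6       # per cell
bandwidth_hz = 5e6
print(throughput_bps / bandwidth_hz, "bps/Hz/cell")   # -> 4.0
```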

Several factors contribute to the spectral efficiency of a system:
• Modulation formats
• Air interface overhead (signaling information other than user data)
• Multiple access method
• Usage model
The quantities just mentioned all contribute to the bits/second/Hertz dimensions of the unit. The appearance of a "per cell" dimension may seem surprising, but the throughput of a particular cell's base station in a cellular network is almost always substantially less than that of a single cell in isolation. The reason is self-interference generated in the network, which requires the operator to allocate frequencies in blocks that are separated in space by one or more cells.

Open Distributed Ad-Hoc Wireless Networks: Low-powered, ad-hoc mesh-architecture networks offer spectrally efficient, high-performance solutions. In such peer-to-peer networks, end-user wireless handsets act as both end terminals and secure wireless routers that are part of the overall network infrastructure. Upstream and downstream transmissions "hop" through subscriber handsets and fixed wireless routers to reach network access points or other end terminals. The routing infrastructure, including handsets, utilizes intelligent routing capabilities to determine the "best path" for each transmission. The "best path" must be defined as the "least power" path, as sketched below.
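The "least power" idea amounts to shortest-path routing over per-hop transmit-power costs. A minimal sketch using Dijkstra's algorithm follows; the mesh topology and power costs are invented for illustration.

```python
import heapq

def least_power_path(links, source, destination):
    """links: {node: [(neighbor, power_cost_mW), ...]} -> (path, total power)."""
    best = {source: (0.0, None)}            # node -> (cost, previous node)
    queue = [(0.0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == destination:
            break
        if cost > best[node][0]:
            continue
        for neighbor, hop_cost in links.get(node, []):
            new_cost = cost + hop_cost
            if neighbor not in best or new_cost < best[neighbor][0]:
                best[neighbor] = (new_cost, node)
                heapq.heappush(queue, (new_cost, neighbor))
    # Walk back from the destination to recover the path.
    path, node = [], destination
    while node is not None:
        path.append(node)
        node = best[node][1]
    return list(reversed(path)), best[destination][0]

mesh = {
    "handset_A": [("handset_B", 5), ("base_station", 100)],
    "handset_B": [("fixed_router", 8)],
    "fixed_router": [("base_station", 12)],
}
# Whispering through neighbors (total 25 mW) beats shouting at the base station (100 mW).
print(least_power_path(mesh, "handset_A", "base_station"))
```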

In other words, network nodes must be able to calculate and update routing tables to send data packets through the paths with minimal power requirements. Subscriber terminals therefore do not "shout" at a centralized base station, but rather whisper to a nearby terminal that routes the transmission to its destination, so subscriber terminals cooperate for spectrum instead of competing for it. Spectrum reuse increases dramatically, while overall battery consumption and RF output within a community of subscribers are reduced. Thus, while a cellular handset can only maintain a 144 kbps (for example) link to the base station, the ad-hoc mesh device can maintain a multi-megabit link without undue interference.
Gbps Packet Transmission: 4G data networks are packet-switched networks; in a field experiment on fourth-generation (4G) radio access, 1 Gbps real-time packet transmission was achieved in the downlink at a moving speed of about 20 km/h. The 1 Gbps real-time packet transmission was realized through Variable Spreading Factor-Spread Orthogonal Frequency Division Multiplexing (VSF-Spread OFDM) radio access and 4-by-4 Multiple-Input Multiple-Output (MIMO) multiplexing using "adaptive selection of surviving symbol replica candidate" (ASESS) based on Maximum Likelihood Detection with QR decomposition and the M-algorithm (QRM-MLD), which was developed by DoCoMo.

Frequency spectrum efficiency, expressed as information bits per second per Hertz, is 10 bits per second per Hertz, about 20 times the spectrum efficiency of 3G radio networks.
4G Features:
• High usability: anytime, anywhere, and with any technology. 4G networks are all-IP based heterogeneous networks that allow users to use any system at any time and anywhere.
• Support for multimedia services at low transmission cost: high-data-rate services with good system reliability will be provided while a low per-bit transmission cost is maintained.
• Personalization.
• Integrated services.
• Entirely packet-switched networks.
• All network elements are digital.
• Higher bandwidth.
• Tight network security.
• Providing a technological response to accelerated growth in demand for broadband wireless connectivity.
• Ensuring seamless service provisioning across a multitude of wireless systems and networks, from private to public, from indoor to wide area.
• Providing optimum delivery of the user's wanted service via the most appropriate network available.
• Coping with the expected growth in Internet-based communications.
• Opening new spectrum frontiers.
• Supporting real-time multimedia services that are highly time-sensitive.
Future of 4G: The future of wireless is not just wireless; it is a part of life. The future offers faster speeds and larger bandwidth. It is suggested that 4G technologies will allow 3D virtual reality and interactive video/hologram images. The technology could also increase interaction between compatible technologies, so that the smart card in the handset could automatically pay for goods when passing a linked payment kiosk (i-mode can already boast this capability) or tell your car to warm up in the morning, because your phone has noted that you have left the house or set the alarm. 4G is expected to provide high-resolution images (better quality than TV images) and video links, all of which will require a bandwidth of about 100 MHz.

It is likely that the forecasts of the next 'killer apps' for 4G technology will change as customer demand develops over time.
Conclusion: Low-cost, high-speed data will drive the fourth generation (4G) forward as short-range communication emerges. Service and application ubiquity, with a high degree of personalization and synchronization between various user appliances, will be another driver. It is probable that the radio access network will evolve from a centralized architecture to a distributed one. 4G is likely to enable the download of full-length songs or music pieces, which may change the market response dramatically. We hope that future generations of wireless networks will provide virtually unlimited opportunities to the global, connected community.

Innovations in network technology will provide an environment in which virtually anything is available, anywhere, at any time, via any connected device.
REFERENCES
• T. Zahariadis and D. Kazakos, "(R)Evolution Toward 4G Mobile Communication Systems," IEEE Wireless Communications, Volume 10, Issue 4, August 2003.
• E. Gustafsson and A. Jonsson, "Always Best Connected," IEEE Wireless Communications, pp. 49-55, Feb. 2003.
• J. Ibrahim, "4G Features," Bechtel Telecommunications Technical Journal, Volume 1, No. 1, pp. 11-14, Dec. 2002.
• W. W. Lu and R. Berezdivin (Guest Editors), "Technologies on Fourth Generation Mobile Communications," IEEE Wireless Communications, vol. 9, no. 2, pp. 8-71, Apr. 2002.

A Walk in the Clouds: Cloud Computing
Saiprasad R. Bejgam, Nitin Kumar Shinde

Dept of Computer Science, Sir MVIT, Bangalore, India
saiprasadrb@gmail.com, Nitincs074@gmail.com

Abstract: Cloud computing promises to increase the velocity with which applications are deployed, increase innovation, and lower costs, all while increasing business agility. We take an inclusive view of cloud computing that covers every facet, from the server, storage, network, and virtualization technology that drives cloud computing environments to the software that runs in virtual appliances that can be used to assemble applications in minimal time. This white paper discusses how cloud computing transforms the way we design, build, and deliver applications, and the architectural considerations that enterprises must make when adopting and using cloud computing technology.
Keywords: API - Application Programming Interface, FTP - File Transfer Protocol, GPS - Global Positioning System, Virtualization.
Introduction: Everyone has an opinion on what cloud computing is. It can be the ability to rent a server or a thousand servers and run a geophysical modeling application on the most powerful systems available anywhere. It can be the ability to rent a virtual server, load software on it, turn it on and off at will, or clone it ten times to meet a sudden workload demand. It can be storing and securing immense amounts of data that is accessible only by authorized applications and users.

It can be supported by a cloud provider that sets up a platform that includes the OS, with the ability to scale automatically in response to changing workloads. Cloud computing can be the ability to use applications on the Internet that store and protect data while providing a service — anything including email, sales force automation and tax preparation. It can be using a storage cloud to hold application, business, and personal data. And it can be the ability to use a handful of Web services to integrate photos, maps, and GPS information to create a mash up in customer Web browsers. There is an inclusive view that there are many different types of clouds, and many different applications that can be built using them.

To the extent that cloud computing helps to increase the velocity at which applications are deployed, helping to increase the pace of innovation, cloud computing may yet take forms that we still cannot imagine today. As we know about the phrase “The Network is the Computer,” we believe that cloud computing is the next generation of network computing. What distinguishes cloud computing from previous models? Boiled down to a phrase, it’s using information technology as a service over the network. We define it as services that are encapsulated, have an API, and are available over the network. This definition encompasses using both compute and storage resources as services.

Cloud computing is based on the principle of efficiency above all — efficiency that produces high-level tools for handling 80% of use cases so that applications can be created and deployed at an astonishing rate. Cloud computing can be provided using an enterprise datacenter’s own servers, or it can be provided by a cloud provider that takes all of the capital risk of owning the infrastructure. The illusion is that resources are infinite. While the field is in its infancy, the model is taking the information technology (IT) world by storm. The predominant model for cloud computing today is called infrastructure as a service, or IaaS, and because of its prominence, the IaaS model is the focus of this paper.

This paper discusses the nature of cloud computing and how it builds on established trends while transforming the way that enterprises everywhere build and deploy applications. It proceeds to discuss the architectural considerations that cloud architects must make when designing cloud-based applications.
The Nature of Cloud Computing
Building on established trends: Cloud computing builds on established trends for driving the cost out of the delivery of services while increasing the speed and agility with which services are deployed. It shortens the time from sketching out an application architecture to actual deployment. Cloud computing incorporates virtualization, on-demand deployment, Internet delivery of services, and open source software.

From one perspective, cloud computing is nothing new because it uses approaches, concepts, and best practices that have already been established. From another perspective, everything is new because cloud computing changes how we invent, develop, deploy, scale, update, maintain, and pay for applications and the infrastructure on which they run. Virtual machines as the standard deployment object Over the last several years, virtual machines have become a standard deployment object. Virtualization further enhances flexibility because it abstracts the hardware to the point where software stacks can be deployed and redeployed without being tied to a specific physical server.

Virtualization enables a dynamic datacenter where servers provide a pool of resources that are harnessed as needed, and where the relationship of applications to compute, storage, and network resources changes dynamically in order to meet both workload and business demands. With application deployment decoupled from server deployment, applications can be deployed and scaled rapidly, without having to first procure physical servers. Virtual machines have become the prevalent abstraction — and unit of deployment — because they are the least-common denominator interface between service providers and developers. Using virtual machines as deployment objects is sufficient for 80 percent of usage, and it helps to satisfy the need to rapidly deploy and scale applications.

Virtual appliances, virtual machines that include software that is partially or fully configured to perform a specific task such as a Web or database server, further enhance the ability to create and deploy applications rapidly. The combination of virtual machines and appliances as standard deployment objects is one of the key features of cloud computing. Compute clouds are usually complemented by storage clouds that provide virtualized storage through APIs that facilitate storing virtual machine images, source files for components such as Web servers, application state data, and general business data. The on-demand, self-service, pay-by-use model The on-demand, self-service, pay-by-use nature of cloud computing is also an extension of established trends.

From an enterprise perspective, the on-demand nature of cloud computing helps to support the performance and capacity aspects of service-level objectives. The self-service nature of cloud computing allows organizations to create elastic environments that expand and contract based on the workload and target performance parameters. And the pay-by-use nature of cloud computing may take the form of equipment leases that guarantee a minimum level of service from a cloud provider. Virtualization is a key feature of this model. IT organizations have understood for years that virtualization allows them to quickly and easily create copies of existing environments, sometimes involving multiple virtual machines, to support test, development, and staging activities.

The cost of these environments is minimal because they can coexist on the same servers as production environments because they use few resources. Likewise, new applications can be developed and deployed in new virtual machines on existing servers, opened up for use on the Internet, and scaled if the application is successful in the marketplace. This lightweight deployment model has already led to a “Darwinist” approach to business development where beta versions of software are made public and the market decides which applications deserve to be scaled and developed further or quietly retired. Cloud computing extends this trend through automation.

Instead of negotiating with an IT organization for resources on which to deploy an application, a compute cloud is a self-service proposition where a credit card can purchase compute cycles, and a Web interface or API is used to create virtual machines and establish network relationships between them. Instead of requiring a long-term contract for services with an IT organization or a service provider, clouds work on a pay-by-use, or pay-by-the-sip, model where an application may exist to run a job for a few minutes or hours, or it may exist to provide services to customers on a long-term basis. Compute clouds are built as if applications are temporary, and billing is based on resource consumption: CPU hours used, volumes of data moved, or gigabytes of data stored.
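As a hypothetical sketch of that self-service model, a single authenticated HTTP call could ask a provider to start a virtual machine. The endpoint, token, image name, and payload fields below are all invented for illustration; each real provider defines its own API.

```python
import json
import urllib.request

# Hypothetical self-service provisioning call: one POST request asks a cloud
# provider to start a virtual machine. Everything here is illustrative.
API_ENDPOINT = "https://api.example-cloud.invalid/v1/instances"
API_TOKEN = "replace-with-account-token"

def launch_instance(image: str, instance_type: str) -> dict:
    request = urllib.request.Request(
        API_ENDPOINT,
        data=json.dumps({"image": image, "type": instance_type}).encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# instance = launch_instance("web-appliance-1.0", "small")
# Billing then accrues only while the instance exists (CPU hours, data moved).
```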

The ability to use and pay for only the resources used shifts the risk of how much infrastructure to purchase from the organization developing the application to the cloud provider. It also shifts the responsibility for architectural decisions from application architects to developers. This shift can increase risk, risk that must be managed by enterprises that have processes in place for a reason, and by the system, network, and storage architects who need to factor it into cloud computing designs. Consider this analogy: historically, a developer writing software in the Java programming language determines when it is appropriate to create new threads to allow multiple activities to progress in parallel.

Today, a developer can discover and attach to a service with the same ease, allowing them to scale an application to the point where it might engage thousands of virtual machines in order to accommodate a huge spike in demand. The ability to program application architecture dynamically puts enormous power in the hands of developers with a commensurate amount of responsibility. To use cloud computing most effectively, a developer must also be an architect, and that architect needs to be able to create a self-monitoring and self-expanding application. The developer/architect needs to understand when it’s appropriate to create a new thread versus create a new virtual machine, along with the architectural patterns for how they are interconnected. When this power is well understood and harnessed, the results can be spectacular.
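A sketch of what such a self-monitoring, self-expanding application might look like, reusing the hypothetical launch_instance() idea from the earlier sketch; the thresholds and the load metric are invented for illustration.

```python
# Toy control loop for a self-expanding application. The launch/terminate
# callables stand in for hypothetical cloud API calls; thresholds are made up.

SCALE_UP_LOAD = 0.8        # average utilisation above which we add a worker VM
SCALE_DOWN_LOAD = 0.2      # below which we remove one
MIN_WORKERS, MAX_WORKERS = 1, 1000

def autoscale(read_average_load, launch_worker, terminate_worker, workers):
    """Run one iteration of the scaling decision and return the worker list."""
    load = read_average_load()
    if load > SCALE_UP_LOAD and len(workers) < MAX_WORKERS:
        workers.append(launch_worker())
    elif load < SCALE_DOWN_LOAD and len(workers) > MIN_WORKERS:
        terminate_worker(workers.pop())
    return workers

# Example with stub functions; a real deployment would run this periodically.
workers = ["vm-0"]
workers = autoscale(lambda: 0.93, lambda: "vm-1", lambda vm: None, workers)
print(workers)   # -> ['vm-0', 'vm-1']
```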

Even large corporations can use cloud computing in ways that solve significant problems in less time and at a lower cost than with traditional enterprise computing. Services are delivered over the network It almost goes without saying that cloud computing extends the existing trend of making services available over the network. Virtually every business organization has recognized the value of Web-based interfaces to their applications, whether they are made available to customers over the Internet, or whether they are internal applications that are made available to authorized employees, partners, suppliers, and consultants. The beauty of Internet-based service delivery, of course, is that applications can be made available anywhere, and at any time.

While enterprises are well aware of the ability to secure communications using Secure Socket Layer (SSL) encryption along with strong authentication, bootstrapping trust in a cloud computing environment requires carefully considering the differences between enterprise computing and cloud computing. When properly architected, Internet service delivery can provide the flexibility and security required by enterprises of all sizes. Cloud computing infrastructure models There are many considerations for cloud computing architects to make when moving from a standard enterprise application deployment model to one based on cloud computing. There are public and private clouds that offer complementary benefits, there are three basic service models to consider, and there is the value of open APIs versus proprietary ones. Public, private, and hybrid clouds

IT organizations can choose to deploy applications on public, private, or hybrid clouds, each of which has its trade-offs. The terms public, private, and hybrid do not dictate location. While public clouds are typically "out there" on the Internet and private clouds are typically located on premises, a private cloud might be hosted at a colocation facility as well. Companies may weigh a number of considerations when choosing which cloud computing model to employ, and they might use more than one model to solve different problems. An application needed on a temporary basis might be best suited for deployment in a public cloud because it helps to avoid the need to purchase additional equipment to solve a temporary need.

Likewise, a permanent application, or one that has specific requirements on quality of service or location of data, might best be deployed in a private or hybrid cloud.
Architectural layers of cloud computing
Software as a service (SaaS): Software as a service features a complete application offered as a service on demand. A single instance of the software runs on the cloud and services multiple end users or client organizations. The most widely known example of SaaS is salesforce.com, though many other examples have come to market, including the Google Apps offering of basic business services such as email and word processing.
Platform as a service (PaaS)

Platform as a service encapsulates a layer of software and provides it as a service that can be used to build higher-level services. There are at least two perspectives on PaaS depending on the perspective of the producer or consumer of the services: • Someone producing PaaS might produce a platform by integrating an OS, middleware, application software, and even a development environment that is then provided to a customer as a service. • Someone using PaaS would see an encapsulated service that is presented to them through an API. The customer interacts with the platform through the API, and the platform does what is necessary to manage and scale itself to provide a given level of service. Virtual appliances can be classified as instances of PaaS.

A content switch appliance, for example, would have all of its component software hidden from the customer, and only an API or GUI for configuring and deploying the service provided to them. Infrastructure as a service (IaaS) Infrastructure as a service delivers basic storage and compute capabilities as standardized services over the network. Servers, storage systems, switches, routers, and other systems are pooled and made available to handle workloads that range from application components to high-performance computing applications. Commercial examples of IaaS include Joyent, whose main product is a line of virtualized servers that provide a highly available on-demand infrastructure. Cloud application programming interfaces

One of the key characteristics that distinguish cloud computing from standard enterprise computing is that the infrastructure itself is programmable. Instead of physically deploying servers, storage, and network resources to support applications, developers specify how the same virtual components are configured and interconnected, including how virtual machine images and application data are stored and retrieved from a storage cloud. They specify how and when components are deployed through an API that is specified by the cloud provider. An analogy is the way in which File Transfer Protocol (FTP) works: FTP servers maintain a control connection with the client that is kept open for the duration of the session.

When files are to be transferred, the control connection is used to provide a source or destination file name to the server, and to negotiate a source and destination port for the file transfer itself. In a sense, a cloud computing API is like an FTP control channel: it is open for the duration of the cloud's use, and it controls how the cloud is harnessed to provide the end services envisioned by the developer. The use of APIs to control how cloud infrastructure is harnessed has a pitfall: unlike the FTP protocol, cloud APIs are not yet standardized, so each cloud provider has its own specific APIs for managing its services. This is the typical state of an industry in its infancy, where each vendor has its own proprietary technology that tends to lock customers in to its services because proprietary APIs make it difficult to change providers. Look for providers that use standard APIs wherever possible. Standard APIs can be used today for access to storage; APIs for deploying and scaling applications are likely to be standardized over time. Also look for cloud providers that understand their own market and provide, for example, ways to archive and deploy libraries of virtual machine images and preconfigured appliances.
Cloud computing benefits: In order to benefit the most from cloud computing, developers must be able to refactor their applications so that they can best use the architectural and deployment paradigms that cloud computing supports.

The benefits of deploying applications using cloud computing include reducing run time and response time, minimizing the risk of deploying physical infrastructure, lowering the cost of entry, and increasing the pace of innovation.
Reduce run time and response time: For applications that use the cloud essentially for running batch jobs, cloud computing makes it straightforward to use 1000 servers to accomplish a task in 1/1000 the time that a single server would require. For applications that need to offer good response time to their customers, refactoring applications so that any CPU-intensive tasks are farmed out to 'worker' virtual machines can help to optimize response time while scaling on demand to meet customer demands.
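A local analogy of the "farm CPU-intensive work out to workers" pattern, using a process pool in place of worker virtual machines; the task and pool size are illustrative.

```python
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_number: int) -> str:
    """Stand-in for a CPU-intensive task (e.g. rendering one video frame)."""
    total = sum(i * i for i in range(100_000))
    return f"frame {frame_number} done ({total % 97})"

if __name__ == "__main__":
    # In a cloud deployment, each worker would be a VM behind a queue rather
    # than a local process; the fan-out/collect structure is the same.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(render_frame, range(8)):
            print(result)
```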

The Animoto application is a good example of how the cloud can be used to scale applications and maintain quality of service levels.
Minimize infrastructure risk: IT organizations can use the cloud to reduce the risk inherent in purchasing physical servers. Will a new application be successful? If so, how many servers are needed and can they be deployed as quickly as the workload increases? If not, will a large investment in servers go to waste? If the application's success is short-lived, will the IT organization invest in a large amount of infrastructure that is idle most of the time? When pushing an application out to the cloud, scalability and the risk of purchasing too much or too little infrastructure become the cloud provider's issue.

In a growing number of cases, the cloud provider has such a massive amount of infrastructure that it can absorb the growth and workload spikes of individual customers, reducing the financial risk they face. Another way in which cloud computing minimizes infrastructure risk is by enabling surge computing, where an enterprise datacenter (perhaps one that implements a private cloud) augments its ability to handle workload spikes by a design that allows it to send overflow work to a public cloud. Application lifecycle management can be handled better in an environment where resources are no longer scarce, and where resources can be better matched to immediate needs, and at lower cost. Lower cost of entry

There are a number of attributes of cloud computing that help to reduce the cost to enter new markets: • Because infrastructure is rented, not purchased, the cost is controlled, and the capital investment can be zero. In addition to the lower costs of purchasing compute cycles and storage “by the sip,” the massive scale of cloud providers helps to minimize cost, helping to further reduce the cost of entry. • Applications are developed more by assembly than programming. This rapid application development is the norm, helping to reduce the time to market, potentially giving organizations deploying applications in a cloud environment a head start against the competition. Increased pace of innovation

Cloud computing can help to increase the pace of innovation. The low cost of entry to new markets helps to level the playing field, allowing start-up companies to deploy new products quickly and at low cost. This allows small companies to compete more effectively with traditional organizations whose deployment process in enterprise datacenters can be significantly longer. Increased competition helps to increase the pace of innovation — and with many innovations being realized through the use of open source software, the entire industry serves to benefit from the increased pace of innovation that cloud computing promotes. The Future of Cloud Computing

The future for cloud computing is bright. The big names in computers are throwing lots of resources into this. Dell sees a huge market for cloud computing in the future years. HP, Intel and more are throwing resources into this, and it looks like cloud computing might be the next big thing after UMPCs. Networks aren’t ready for mass roll out yet, and connection speeds aren’t yet up to handling this much data. But even Amazon sees a bright future in cloud computing. They have recently released a beta program called Amazon Web Services. The whole idea behind it is resizable computing power. When you need the power, it’s there, but when you don’t, you can scale back.

The bang for the buck with the Amazon program is the highest; it is almost a pay-as-you-go plan for computing cycles.
Conclusion: Cloud computing is the next big wave in computing. It has many benefits, such as better hardware management, since all the computers are the same and run the same hardware. It also provides for better and easier management of data security, since all the data is located on a central server, so administrators can control who does and does not have access to the files. There are some downsides as well to cloud computing. Peripherals such as printers or scanners might have issues dealing with the fact that there is no hard drive attached to the physical, local machine.

If a user works on machines that are not their own and that require access to particular drivers or programs, it can still be a struggle to make the right applications available to that user. If you're looking to implement this, you have two options: you can host it all within your network, or you can use a device from a company that provides the server storage. I hope you have learned a lot about cloud computing and the bright future it has in the coming years.
References
[1] http://en.wikipedia.org/wiki/Cloud_computing
[2] http://aws.amazon.com/ec2/
[3] http://www.smallbusinesscomputing.com/biztools/article.php/3809726
[4] http://www.pcworld.com/businesscenter/article/149892/google_apps_admins_jittery_about_gmail_hopeful_about_future.html

ARTIFICIAL INTELLIGENCE
RAVI KR. SINHA, RUPSHI
ravi_sinha0506@yahoo.com, rupshissec@gmail.com

Abstract: Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representations.

Instead, the intelligent system is decomposed into independent and parallel activity producers which all interface directly to the world through perception and action, rather than interfacing to each other particularly much. The notions of central and peripheral systems evaporate; everything is both central and peripheral. Based on these principles we have built a very successful series of mobile robots which operate without supervision as Creatures in standard office environments.
INTRODUCTION
Artificial Intelligence is concerned with the design of intelligence in an artificial device. The term was coined by McCarthy in 1956. There are two ideas in the definition:
1. Intelligence
2. An artificial device
What is intelligence?

Is it that which characterizes humans? Or is there an absolute standard of judgement? Accordingly there are two possibilities:
– A system with intelligence is expected to behave as intelligently as a human.
– A system with intelligence is expected to behave in the best possible manner.
Secondly, what type of behavior are we talking about?
– Are we looking at the thought process or reasoning ability of the system?
– Or are we only interested in the final manifestations of the system in terms of its actions?
Given this scenario, different interpretations have been used by different researchers in defining the scope and view of Artificial Intelligence.
1.

One view is that artificial intelligence is about designing systems that are as intelligent as humans. This view involves trying to understand human thought and an effort to build machines that emulate the human thought process. This is the cognitive science approach to AI.
2. The second approach is best embodied by the concept of the Turing Test. Turing held that in the future computers could be programmed to acquire abilities rivalling human intelligence. As part of his argument Turing put forward the idea of an 'imitation game', in which a human being and a computer would be interrogated under conditions where the interrogator would not know which was which, the communication being entirely by textual messages.

Turing argued that if the interrogator could not distinguish them by questioning, then it would be unreasonable not to call the computer intelligent. Turing's 'imitation game' is now usually called 'the Turing test' for intelligence.
The Turing Test: Consider the following setting. There are two rooms, A and B. One of the rooms contains a computer. The other contains a human. The interrogator is outside and does not know which one is a computer. He can ask questions through a teletype and receives answers from both A and B. The interrogator needs to identify which of A and B is the human. To pass the Turing test, the machine has to fool the interrogator into believing that it is human. For more details on the Turing test visit http://cogsci.ucsd.edu/~asaygin/tt/ttest.html
3.

Logic and the laws of thought: this view deals with the study of ideal or rational thought processes and inference. The emphasis in this case is on the inferencing mechanism and its properties; how the system arrives at a conclusion, or the reasoning behind its selection of actions, is very important in this point of view. The soundness and completeness of the inference mechanisms are important here.
4. The fourth view of AI is that it is the study of rational agents. This view deals with building machines that act rationally. The focus is on how the system acts and performs, and not so much on the reasoning process. A rational agent is one that acts rationally, that is, in the best possible manner.
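A minimal sketch of the rational-agent view: the agent repeatedly maps a percept to the action with the best expected outcome. The one-dimensional environment and utility function below are invented purely for illustration.

```python
# Toy rational agent: choose the action that maximizes expected utility.
ACTIONS = ["move_left", "move_right", "stay"]

def expected_utility(percept: dict, action: str) -> float:
    """Toy utility: prefer moving toward the goal position."""
    position, goal = percept["position"], percept["goal"]
    new_position = position + {"move_left": -1, "move_right": 1, "stay": 0}[action]
    return -abs(goal - new_position)

def rational_agent(percept: dict) -> str:
    """Act 'in the best possible manner' given the current percept."""
    return max(ACTIONS, key=lambda action: expected_utility(percept, action))

print(rational_agent({"position": 2, "goal": 7}))   # -> move_right
```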

Typical AI problems
While studying the typical range of tasks that we might expect an "intelligent entity" to perform, we need to consider both "common-place" tasks as well as expert tasks. Examples of common-place tasks include:
– Recognizing people and objects.
– Communicating (through natural language).
– Navigating around obstacles on the streets.
These tasks are done matter-of-factly and routinely by people and some other animals. Expert tasks include:
• Medical diagnosis.
• Mathematical problem solving.
• Playing games like chess.
These tasks cannot be done by all people, and can only be performed by skilled specialists. Now, which of these tasks are easy and which ones are hard?

Clearly tasks of the first type are easy for humans to perform, and almost all are able to master them. The second range of tasks requires skill development and/or intelligence and only some specialists can perform them well. However, when we look at what computer systems have been able to achieve to date, we see that their achievements include performing sophisticated tasks like medical diagnosis, performing symbolic integration, proving theorems and playing chess. On the other hand it has proved to be very hard to make computer systems perform many routine tasks that all humans and a lot of animals can do. Examples of such tasks include navigating our way without running into things, catching prey and avoiding predators.

Humans and animals are also capable of interpreting complex sensory information. We are able to recognize objects and people from the visual image that we receive. We are also able to perform complex social functions.
Approaches to AI
Strong AI aims to build machines that can truly reason and solve problems. These machines should be self-aware and their overall intellectual ability needs to be indistinguishable from that of a human being. Excessive optimism in the 1950s and 1960s concerning strong AI has given way to an appreciation of the extreme difficulty of the problem. Strong AI maintains that suitably programmed machines are capable of cognitive mental states.

Weak AI deals with the creation of some form of computer-based artificial intelligence that cannot truly reason and solve problems, but can act as if it were intelligent. Weak AI holds that suitably programmed machines can simulate human cognition.
Applied AI aims to produce commercially viable "smart" systems such as, for example, a security system that is able to recognise the faces of people who are permitted to enter a particular building. Applied AI has already enjoyed considerable success.
Cognitive AI: computers are used to test theories about how the human mind works, for example, theories about how we recognise faces and other objects, or about how we solve abstract problems.
Limits of AI Today

Today's successful AI systems operate in well-defined domains and employ narrow, specialized knowledge. Common sense knowledge is needed to function in complex, open-ended worlds. Such a system also needs to understand unconstrained natural language. However these capabilities are not yet fully present in today's intelligent systems.
What can AI systems do?
Today's AI systems have been able to achieve limited success in some of these tasks.
• In computer vision, the systems are capable of face recognition.
• In robotics, we have been able to make vehicles that are mostly autonomous.
• In natural language processing, we have systems that are capable of simple machine translation.
• Today's expert systems can carry out medical diagnosis in a narrow domain.
• Speech understanding systems are capable of recognizing continuous speech over vocabularies of several thousand words.
• Planning and scheduling systems have been employed in scheduling experiments with the Hubble Telescope.
• Learning systems are capable of text categorization into about 1,000 topics.
• In games, AI systems can play at the Grand Master level in chess (world champion), checkers, etc.
Intelligent behaviour
This discussion brings us back to the question of what constitutes intelligent behaviour. Some of these tasks and applications are:
• Perception, involving image recognition and computer vision
• Reasoning
• Learning
• Understanding language, involving natural language processing and speech processing
• Solving problems
• Robotics
Practical Impact of AI
AI components are embedded in numerous devices, e.g. in copy machines for automatic correction of operation to improve copy quality. AI systems are in everyday use for identifying credit card fraud, for advising doctors, for recognizing speech and in helping complex planning tasks. Then there are intelligent tutoring systems that provide students with personalized attention. Thus AI has increased understanding of the nature of intelligence and found many applications. It has helped in the understanding of human reasoning, and of the nature of intelligence.

It has also helped us understand the complexity of modeling human reasoning. We will now look at a few famous AI systems.
1. ALVINN: Autonomous Land Vehicle In a Neural Network
In 1989, Dean Pomerleau at CMU created ALVINN. This is a system which learns to control vehicles by watching a person drive. It contains a neural network whose input is a 30x32 unit two-dimensional camera image. The output layer is a representation of the direction the vehicle should travel. The system drove a car from the East Coast of the USA to the West Coast, a total of about 2850 miles. Out of this, about 50 miles were driven by a human, and the rest solely by the system.
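A toy network in the spirit of ALVINN, mapping a 30x32 camera image to a preferred steering unit. The layer sizes are roughly in line with the description above, but the random, untrained weights are purely illustrative; ALVINN learned its weights from recorded human driving.

```python
import numpy as np

rng = np.random.default_rng(0)
INPUT_UNITS = 30 * 32          # flattened camera image
HIDDEN_UNITS = 5
STEERING_UNITS = 30            # discretised steering directions

# Illustrative random weights; a real system would learn these from data.
w_hidden = rng.standard_normal((INPUT_UNITS, HIDDEN_UNITS)) * 0.01
w_output = rng.standard_normal((HIDDEN_UNITS, STEERING_UNITS)) * 0.01

def steer(image: np.ndarray) -> int:
    """Return the index of the steering direction the network prefers."""
    x = image.reshape(-1)                       # flatten 30x32 -> 960
    hidden = np.tanh(x @ w_hidden)
    scores = hidden @ w_output
    return int(np.argmax(scores))

camera_frame = rng.random((30, 32))             # stand-in for a video frame
print("steering unit:", steer(camera_frame))
```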

2. Deep Blue
In 1997, the Deep Blue chess program, created by IBM, beat the reigning world chess champion, Garry Kasparov.
3. Machine translation
A system capable of translating between people speaking different languages would be a remarkable achievement of enormous economic and cultural benefit. Machine translation is one of the important fields of endeavour in AI. While some translating systems have been developed, there is a lot of scope for improvement in translation quality.
4. Autonomous agents
In space exploration, robotic space probes autonomously monitor their surroundings, make decisions and act to achieve their goals. NASA's Mars rovers successfully completed their primary three-month missions in April 2004.

The Spirit rover had been exploring a range of Martian hills that took two months to reach. It found curiously eroded rocks that may be new pieces to the puzzle of the region's past. Spirit's twin, Opportunity, had been examining exposed rock layers inside a crater.
5. Internet agents
The explosive growth of the internet has also led to growing interest in internet agents that monitor users' tasks, seek needed information, and learn which information is most useful.
What can AI systems NOT do yet?
• Understand natural language robustly (e.g., read and understand articles in a newspaper)
• Surf the web
• Interpret an arbitrary visual scene
• Learn a natural language
• Construct plans in dynamic real-time domains
• Exhibit true autonomy and intelligence
AI History
The intellectual roots of AI date back to early studies of the nature of knowledge and reasoning. The dream of making a computer imitate humans also has a very early history. The concept of intelligent machines is found in Greek mythology. There is an ancient story about Pygmalion, the legendary king of Cyprus. He fell in love with an ivory statue he made to represent his ideal woman. The king prayed to the goddess Aphrodite, and the goddess miraculously brought the statue to life. Other myths involve human-like artifacts.

As a present from Zeus to Europa, Hephaestus created Talos, a huge robot. Talos was made of bronze and his duty was to patrol the beaches of Crete. Aristotle (384-322 BC) developed an informal system of syllogistic logic, which is the basis of the first formal deductive reasoning system. Early in the 17th century, Descartes proposed that bodies of animals are nothing more than complex machines. Pascal in 1642 made the first mechanical digital calculating machine. In the 19th century, George Boole developed a binary algebra representing (some) “laws of thought. ” Charles Babbage & Ada Byron worked on programmable mechanical calculating machines.

In the late 19th and early 20th centuries, mathematical philosophers such as Gottlob Frege, Bertrand Russell, Alfred North Whitehead, and Kurt Gödel built on Boole’s initial logic concepts to develop mathematical representations of logic problems. The advent of electronic computers provided a revolutionary advance in the ability to study intelligence. In 1943, McCulloch and Pitts developed a Boolean circuit model of the brain. Their paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” explained how it is possible for neural networks to compute; a minimal sketch of such a threshold unit is given after the list below. The 1990s saw major advances in all areas of AI, including:
• machine learning and data mining
• intelligent tutoring
• case-based reasoning
• multi-agent planning and scheduling
• uncertain reasoning
• natural language understanding and translation
• vision, virtual reality, games, and other topics
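As a hedged illustration of the McCulloch-Pitts idea referred to above (not code from any cited source; the weights and thresholds are chosen only for the example), the sketch below implements a single threshold unit and shows how it can realize the Boolean AND and OR functions:

```python
# Minimal McCulloch-Pitts-style threshold unit: it fires (outputs 1) when
# the weighted sum of its binary inputs reaches the threshold.
def threshold_unit(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With all weights set to 1, a threshold of 2 realizes AND and a threshold
# of 1 realizes OR for two binary inputs (values chosen for the example).
for a in (0, 1):
    for b in (0, 1):
        and_out = threshold_unit((a, b), (1, 1), threshold=2)
        or_out = threshold_unit((a, b), (1, 1), threshold=1)
        print(f"a={a} b={b}  AND={and_out}  OR={or_out}")
```

Networks built from such simple threshold units were the paper's demonstration that neural circuits can compute logical functions.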

An Architecture for Exporting Environment Awareness to Mobile Computing Applications
soumyashreev.v & vijayalakshmi.g

Abstract
In mobile computing, factors such as add-on hardware components and heterogeneous networks result in an environment of changing resource constraints. An application in such a constrained environment must adapt to these changes so that available resources are properly utilized. We propose an architecture for exporting awareness of the mobile computing environment to an application.

In this architecture, a change in the environment is modeled as an asynchronous event that includes information related to the change. Events are typed and are organized as an extensible class hierarchy so that they can be handled at different levels of abstraction, according to the requirements of each application. We also compare two approaches to structuring an adaptive application: one addresses the problem of incorporating adaptiveness into legacy applications, while the other considers the design of an application with adaptiveness in mind.

Index Terms—Mobile computing, resource constraints, environment awareness, adaptive application architectures, event delivery framework.

1 INTRODUCTION
MOBILE computing is associated with an environment of constrained resources.
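The class names and fields below are illustrative assumptions rather than the authors' actual framework, but they sketch how environment changes could be modeled as a typed, extensible event hierarchy that an application handles at whatever level of abstraction it needs:

```python
from dataclasses import dataclass

# Base type: any change in the mobile computing environment.
@dataclass
class EnvironmentEvent:
    description: str

# Intermediate abstraction levels: network- and power-related changes.
@dataclass
class NetworkEvent(EnvironmentEvent):
    interface: str

@dataclass
class PowerEvent(EnvironmentEvent):
    battery_percent: int

# The most specific events carry the details of the change.
@dataclass
class BandwidthChangeEvent(NetworkEvent):
    kbps: int

@dataclass
class DisconnectionEvent(NetworkEvent):
    expected_duration_s: float

# An application that only cares that "something changed on the network"
# can handle NetworkEvent; a more adaptive one can inspect the subtypes.
def handle(event: EnvironmentEvent) -> None:
    if isinstance(event, BandwidthChangeEvent):
        print(f"bandwidth on {event.interface} is now {event.kbps} kbps")
    elif isinstance(event, NetworkEvent):
        print(f"network change on {event.interface}: {event.description}")
    else:
        print(f"environment change: {event.description}")

handle(BandwidthChangeEvent("CDPD link up", interface="cdpd0", kbps=19))
```

New event types can be added by subclassing without changing existing handlers, which is what makes such a hierarchy extensible.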

Although these constraints are becoming less noticeable, the portability of a mobile computer will always induce constraints when compared to non-mobile computers. For instance, battery-powered mobile computers will always face power constraints relative to their fixed counterparts. Since current technology [1] also allows hardware components to be added or removed while a mobile computer is still powered on, an element of dynamicity is introduced into the constrained mobile computing environment. In such an environment, the system must adapt in order to utilize available resources appropriately. A mobile computing system must also deal with dynamic network connectivity caused by heterogeneous network technologies.

For example, fast connectivity of wired networks or wireless networks such as WaveLAN [2] may be available indoors, while slower cellular or CDPD [3] connectivity may be available outdoors. Although network and transport protocols for mobile hosts [4], [5] can transparently maintain network connectivity across these technologies, they are mostly tuned to adapt and recover from transient changes in network conditions. These protocols are inadequate to handle the long-term changes in network parameters that characterize connections to a mobile host. A robust mobile computing system must complement these protocols by adapting to long-term network changes and periods of temporary disconnection. In current systems, resources are managed almost exclusively by the underlying operating system.

This is justified because resources are shared among competing applications belonging to different users. Since changes in resource availability are uncommon, system resource management is usually a simple call-admission process. After acquiring a resource, an application assumes its availability until the resource is no longer required. Any spurious unavailability of the resource is exported to the application as a failure, similar to that caused by a failed call-admission. Furthermore, an acquired resource is never explicitly revoked by the operating system. The application therefore requires little built-in adaptation, being effectively unaware of changes to resource availability.

A mobile computer, however, is typically dedicated to a single user, who owns all applications in the system. The user of such a computer usually focuses on a few applications, implicitly defining a priority amongst them. Although resource allocation can be left to the operating system, we believe that better utilization is possible if mobile computing applications participate. An application can contribute to system resource allocation by conservatively utilizing resources according to both their availability and the implicit priority of the application as defined by the user. For instance, on a low-battery condition, an application can disable a graphical user interface, preferring a text-based one.

This change of user interface may consume less processing power, allowing the processor to operate in a low-power mode; the application has thereby implicitly contributed towards power allocation in the system. Another application could buffer outgoing mail messages during periods of intermittent network connectivity, flushing the mail send queue when a network with sufficient bandwidth is detected (a sketch of such a policy is given below). This application contributes to the conservative utilization of network resources during periods of scarce network bandwidth. Still another application can use the iconized state of its display window as a hint to inhibit network activity. No scheduling of network activity by the operating system can perform better than such voluntary restraint.
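The following is a hedged, self-contained sketch of the mail-buffering behaviour just described; the flush threshold, the smoothing factor, and the send_message callback are illustrative assumptions rather than anything specified in the paper:

```python
from collections import deque

class AdaptiveMailQueue:
    """Buffer outgoing mail while bandwidth is scarce; flush when it recovers."""

    def __init__(self, send_message, flush_threshold_kbps=64.0, alpha=0.3):
        self.send_message = send_message            # callback that actually sends one message
        self.flush_threshold_kbps = flush_threshold_kbps
        self.alpha = alpha                          # smoothing factor for the bandwidth estimate
        self.estimate_kbps = 0.0
        self.pending = deque()

    def submit(self, message):
        """Queue a message; it is sent later, when bandwidth permits."""
        self.pending.append(message)

    def on_bandwidth_sample(self, kbps):
        """Called whenever the environment reports a new bandwidth measurement."""
        # An exponentially weighted moving average smooths out transient spikes,
        # so the queue reacts to sustained changes rather than momentary ones.
        self.estimate_kbps = (1 - self.alpha) * self.estimate_kbps + self.alpha * kbps
        if self.estimate_kbps >= self.flush_threshold_kbps:
            self.flush()

    def flush(self):
        while self.pending:
            self.send_message(self.pending.popleft())

# Example usage with a stand-in sender.
queue = AdaptiveMailQueue(send_message=lambda m: print("sent:", m))
queue.submit("status report")
queue.submit("photos from the field")
for sample in (8, 12, 10, 900, 950):   # slow CDPD-like samples, then a fast LAN
    queue.on_bandwidth_sample(sample)
```

Smoothing the bandwidth samples keeps the queue from flushing on a momentary spike, so it reacts to sustained improvements rather than transient ones.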

In general, a mobile computing application must dynamically upgrade its quality of service when a resource becomes available, and gracefully degrade when the quality of a resource deteriorates or the resource becomes unavailable. In order to do so, the application must be:
1) sufficiently general, so that alternate resource availability situations can be handled;
2) aware of current resource availability; and
3) structured so that functionality and resource usage are altered according to application requirements and current resource availability.
The mobile computing system must, therefore, export awareness of the resource environment to an application. Important components of the mobile computing environment that must be considered include the battery, memory, disk, network, and CPU. Although current operating systems are capable of recognizing changes in resource availability, we believe the abstractions for informing an application of the induced changes are inadequate for mobile computing. In order to address this inadequacy, we propose a new approach to making an application aware of environmental changes. The architecture is based on an event delivery mechanism over which typed events can be delivered to mobile computing applications. Event types can be organized as an extensible class hierarchy so that applications can handle them at different levels of abstraction; a sketch of such type-based delivery is given below.
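To make the idea of handling events at different levels of abstraction concrete, here is a hedged sketch (class and method names are assumptions, not the authors' actual API) of a dispatcher that delivers each event to handlers registered for the event's own type or for any of its ancestor types:

```python
class Event:
    """Base class for all environment events."""

class NetworkEvent(Event):
    pass

class BandwidthChangeEvent(NetworkEvent):
    def __init__(self, kbps):
        self.kbps = kbps

class EventDispatcher:
    """Deliver events to handlers registered at any level of the type hierarchy."""

    def __init__(self):
        self.handlers = {}   # maps event class -> list of callables

    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def publish(self, event):
        # Walk the event's class and its ancestors, so a handler registered
        # for NetworkEvent also sees every BandwidthChangeEvent.
        for cls in type(event).__mro__:
            for handler in self.handlers.get(cls, []):
                handler(event)

dispatcher = EventDispatcher()
# A coarse-grained (legacy-style) subscriber only needs to know "the network changed".
dispatcher.subscribe(NetworkEvent, lambda e: print("network changed"))
# A fully adaptive subscriber reacts to the specific change.
dispatcher.subscribe(BandwidthChangeEvent, lambda e: print(f"now {e.kbps} kbps"))

dispatcher.publish(BandwidthChangeEvent(kbps=56))
```

A subscriber that registers only for NetworkEvent still receives every BandwidthChangeEvent, while a fully adaptive subscriber can register for the specific subtype and inspect its fields.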
