Executive Summary

Businesses have gained significantly from cloud computing technologies, including cloud storage, file sharing and collaboration, email, “Green IT”, improved efficiency and productivity, web applications, document management, offsite backups, and IT scalability for business expansion. Small and medium businesses have enjoyed benefits related to productivity, cost savings, and general organization and structure, as well as a platform for venturing into new locations. Large businesses, on the other hand, have used their financial power to distribute computing resources between on-premise data centers and the public cloud. Traditionally, businesses had to invest heavily in IT infrastructure and specialists to drive their daily operations; cloud computing has emerged as a solution that eliminates much of this capital expenditure and the need for in-house expertise.

Cloud computing can be traced back to the 1950s, when thin clients were developed to access a single mainframe computer that acted as the server. In the 1960s, ARPANET was launched, from which today’s internet was formed. The web then saw great advancements, from Web 1.0 onwards, and the widespread adoption of Web 2.0 in the late 2000s brought much of today’s cloud computing capability to life. Today, cloud computing is deployed in three major models: private cloud, public cloud and hybrid cloud. The future of data centers will see enhancements inclined towards “Green IT”, server optimization and efficiency, flash memory for caching and quick access to critical data, security, and artificial intelligence.

Despite cons such as security threats, vendor lock-in, over-dependence on internet connectivity, and outages and downtime, cloud computing is a critical computing solution across the corporate world. It is the technology to watch in business transformation today and in the future.

1.0 Introduction

1.1 Cloud computing and businesses

1.2 Why consider moving to the cloud

2.0 History of the cloud

2.1 How the cloud came to be today

2.2 Public, private and hybrid clouds

2.3 The future of data centers

3.0 Cases of businesses moving their IT systems to the cloud

3.1 Small business

3.1.1 Pros

3.1.2 Cons

3.1.3 Estimated cost

3.2 Medium business

3.2.1 Pros

3.2.2 Cons

3.2.3 Estimated cost

3.3 Large business

3.3.1 Pros

3.3.2 Cons

3.3.3 Estimated cost

4.0 Conclusion

5.0 References

1.0 Introduction

1.1 Cloud computing and businesses

Cloud computing has emerged as a critical enabler of competitive advantage through superior performance in day-to-day business processes. These days, it appears that everything revolves around the cloud: servers, services and applications are being migrated to it, in addition to data storage and other computing elements (Dutta, Guo & Choudhary, 2013).

Cloud computing refers to the provision of IT infrastructure (hardware and software) as a service, on an on-demand basis, in a highly elastic and multi-tenant (shared) environment. Server and storage virtualization are the two major approaches used to achieve the required abstraction of the underlying computing infrastructure, which ultimately creates a cloud. Personnel involved in the deployment of cloud computing projects need to consider the relationship between business needs and cloud infrastructure models (Babar & Chauhan, 2011). Typically, there are three fundamental cloud infrastructure models to choose from: private, public and hybrid (a combination of public and private clouds) (Stieninger & Nedbal, 2014).

1.2 Why consider moving to the cloud

The ultimate decision to migrate to the cloud is typically triggered by the need to initiate a change or to adjust to a situation or incident. However, for maximum and tangible benefits, the decision must be well founded. What makes moving to the cloud worthwhile for a business today? The following are reasons why businesses should consider the move (Alali & Chia-Lun, 2012; Molen, 2012):

  • Much IT equipment is approaching end of life: servers and other critical IT equipment may be getting old or may have been pushed to their limits, necessitating remediation. The corporate world is experiencing a high rate of IT evolution that forces businesses either to replace old IT equipment or to migrate to the cloud. Failure to act places a business at risk of unexpected crashes and downtime that would disrupt business processes. Investing in new equipment is one option, but moving to the cloud is often the better one, since it eliminates the need to purchase new hardware and thus saves costs. With cloud computing, a business can avoid damaging downtime caused by the crash of critical IT equipment such as a server, while cutting costs effectively. In addition, most cloud service providers offer hardware upgrades as part of their packages, eliminating the risks associated with looming end of life.
  • To run energy-efficient IT as a way of addressing environmental challenges: when businesses migrate their servers and applications to the cloud, they reduce energy usage and carbon footprint by approximately 30%. The amount of retired hardware that ends up as electronic waste is also considerably reduced, because the aggregate amount of IT equipment in use is relatively smaller with cloud computing. Several factors make the cloud energy-efficient and closer to carbon neutral, including dynamic provisioning to reduce wasted IT resources, appropriate server utilization, and data center designs that reduce power loss.
  • To kick-start the operations of a start-up or boost cash flow for an existing business: start-ups relentlessly pursue cost savings, and cloud computing is an ideal option. It provides a platform to start operating without huge IT investments and to deploy IT infrastructure faster. The savings realized may be redirected towards other areas, for example sales and marketing, to boost the bottom line. Typically, cloud services are subscription-oriented, implying little or no need for substantial capital expenditure. In addition, ongoing subscription costs are predictable, which enables a business to devise appropriate cost-reduction mechanisms.
  • To address seasonal changes in the market that affect network capacity: seasonal fluctuations in the demand for goods and services strain a business’s IT capability to handle market changes efficiently. High demand during holiday seasons such as Christmas strains network capacity, yet it is followed by low demand in January. Scaling up and down is achieved simply by subscribing to and unsubscribing from cloud services, allowing affordable, efficient and effective adjustment to seasonal fluctuations. On-demand IT services give businesses the ability to control their IT infrastructure as needed. As such, the cloud provides a better approach to investment protection.
  • To support the operations of a growing business: growth is good news, but it demands that technology keep pace with, and sustain, that growth. Cloud computing provides the flexibility and seamlessness required of an IT system serving a growing business, making it easy to add system functionality, branches, clients, vendors, partners and employees as needed. In fact, building an internal IT infrastructure may cost more in time, human resources and money than outsourcing equivalent or even superior IT services from cloud providers. Cloud computing allows a business to focus on continuous, sustainable growth and increased profits without worrying about the state of its IT systems, which are taken care of by the contracted cloud providers. Expanding to new locations simply requires a reliable internet connection to access cloud services, mitigating the risks and costs of a considerable expansion of internal IT infrastructure.
  • To pursue sustainable competitiveness: the cloud matters to businesses because its strategic significance is notable in delivering efficiency and far more. It helps businesses gain competitiveness through better decisions, business reinvention and greater collaboration. Increased efficiency in service delivery is a recipe for greater customer satisfaction and, eventually, profitability, growth and competitiveness. Forward-thinking businesses have used the cloud to redefine their strategies and grow faster than rivals. Cloud computing has boosted businesses’ capability to exploit their IT systems to establish solid analytics platforms and to integrate with third-party applications such as social media sites for advertising. The cloud is a pacesetter in offering better, smarter and more connected access to applications, data and services than ever before. Businesses are therefore set to gain a competitive edge with proper deployment of the cloud, by allowing stakeholders to access IT resources anytime, anywhere – whether on or off business premises – and from almost any computing device.
  • To reduce the time and energy IT staff spend managing desktop and server devices, freeing up time for other strategic projects.
  • To support remote workers: some workers execute their duties away from their office desks, either from home or on the move. Others travel regularly, and their input into day-to-day business processes is still needed. For such remote access to business IT hardware and software, the cloud has emerged as an efficient and secure solution. With a mobile device such as a laptop, smartphone or tablet, and an internet connection, staff can carry out their individual duties and work on collaborative projects with team members who may be hundreds of miles away. The cloud is therefore integral to giving people in different locations access to a shared pool of IT resources.

It is therefore worthwhile to consider moving IT resources to the cloud, since doing so evidently improves business performance. This thesis covers considerations and perspectives on migrating business IT infrastructure to the cloud. It discusses how the cloud came to be, public, private and hybrid clouds, and the future of data centers, and then examines cases of small, medium and large businesses to assess the pros, cons and cost estimates, allowing for a smart and well-thought-out decision on implementing cloud computing.

2.0 History of the cloud

2.1 How the cloud came to be today

Cloud computing may seem like a relatively new technology, but it is the creation of several distributed technologies. Success in server, storage and network virtualization across IT equipment in unified data centers laid the foundation for today’s cloud. Auto-scaling and dynamic provisioning were critical techniques in the emergence of corporate clouds. At the time, this was referred to as utility computing, and Amazon was among the first companies using it to manage its internal IT resources. Amazon developed an API that many customers and developers used to implement and operate IT resources in the cloud. In 2006, Amazon launched Amazon Web Services (AWS) and the Elastic Compute Cloud (EC2) to offer cloud computing services to its customers based on utility computing – computation, storage, human intelligence and networking. The cloud architecture has since been significantly improved in efficiency, and features such as power efficiency and auto-provisioning have been added over time. Amazon has thus been phenomenal in the development of the cloud (Nikos & Gillam, 2010).

Virtualization technology also emerged as a key factor in the growth of cloud computing. IBM’s development of the virtual machine concept in the 1970s allowed many distinct computers to run in a single physical processing environment. Each virtual machine had its own processor, memory and other computing components. Networking solutions were added later, when it emerged that more virtual components could be added without increasing hardware. Provisioning was born to shift traffic appropriately and balance load and bandwidth on the network for better service (Smoot & Tan, 2012).

Cloud computing evolved from utility and grid computing and the broader pursuit, dating from the 1960s, of delivering IT through globally interconnected devices. The idea can be traced back to the work of J.C.R. Licklider and to 1969, when the Advanced Research Projects Agency Network (ARPANET) initiative was brought to life. Licklider’s vision was to interconnect everyone in the world and provide access to applications and data from anywhere, at any time. According to Smoot & Tan (2012), this vision of a globally interconnected network sounds much like today’s cloud computing. The idea of delivering computing as a public utility, in a manner analogous to service bureaus and the electricity grid, also contributed to cloud computing. In the 1990s, the internet advanced to provide high-bandwidth connections, which enabled consumers to access more cloud services than ever before. Advancements in internet capabilities widened the reach and scope of cloud computing until almost everything could be delivered through the cloud.

The emergence of Web 2.0 technologies propelled the cloud to the masses globally (Alali & Chia-Lun, 2012). Since the official commercial use of the internet began, the web has seen tremendous growth in infrastructure and applications, especially in the 1990s. As a result, a naming convention was adopted to mark the stages of this evolution – Web 1.0, Web 2.0 and Web 3.0 – and much more is expected of future advancements. Web 2.0 is the key standard that triggered a breakthrough in the use of the World Wide Web for the provision of cloud-based applications, data and services. Web 2.0 provided the capability to build sites that performed better than traditional static sites, with the dynamism required to let users interact with them actively. With Web 2.0, users could generate content in a virtual environment, as opposed to Web 1.0 sites where users were passive viewers of content. Web 2.0 turned the web into a host for millions of hosted services, social networking sites, music and video sharing sites, mashups, embedded advertisements, and web-based applications. These are instances of Web 2.0 applications that provide a collaborative, shared platform – a key enabler of cloud-based applications and services. The emergence of Salesforce.com, greatly enhanced by Web 2.0 technologies, is a major milestone in the history of cloud computing, because it pioneered the delivery of enterprise applications and services through a simple website. Salesforce.com opened the way for mainstream software companies to use the web for application delivery. “Killer” apps from technology leaders Google, Apple and Microsoft have also contributed greatly to cloud computing (Molen, 2012). These companies provide efficient and reliable software and services that users find easy to subscribe to and consume, resulting in a whole set of widely accepted online applications and services.

However, we cannot ignore that mainframes allowed users access to a single computer via dumb terminals, which would be called thin clients today. This cut costs, because one mainframe could serve many users through shared access. Dedicated, inflexible point-to-point physical connections were later replaced by virtual private network links providing shared access to physical and virtual infrastructure. Computing became affordable with the boom in personal computers in the 1980s and 1990s, pioneered by IBM, Compaq, Dell and Apple, while Microsoft pushed its Windows operating system to the world of PCs. Increased software interoperability has also enabled more connected computing than ever before. Today, cloud computing sits on a solid internet backbone and is an integral element of IT solutions for personal and business applications through three major service models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). IaaS offers computing resources – servers, storage, networking and data center facilities – on a pay-per-use basis. PaaS offers a cloud-oriented platform with everything required to support the entire lifecycle of designing and building web-based applications, eliminating the complexity, time and cost incurred in purchasing and managing the underlying hardware and software as well as hosting and provisioning. SaaS delivers complete, ready-to-use applications over the internet, typically on a subscription basis, with the provider managing everything beneath the application itself (Nikos & Gillam, 2010).

2.2 Public, private and hybrid clouds         

Cloud computing is deployed in different ways based on a number of factors, including security and privacy requirements, the location where cloud services are to be hosted, the degree to which resources need to be shared, and customization capabilities. Typically, there are three cloud deployment models: the public cloud, the private cloud and the hybrid cloud.

According to Budrienė & Zalieckaitė (2012), public clouds are offered by dedicated third-party vendors, who provide IT resources to users on a subscription basis. Ownership and operation of a public cloud rest entirely with the cloud vendor, so the model does not require users to buy hardware, software, supporting infrastructure or any future infrastructure. Virtual machines, servers, storage systems, applications and data from all subscribers coexist on the same cloud network environment. Public cloud environments offer many benefits, among them their characteristically larger size compared with business-owned private clouds, which allows businesses to scale up or down to meet varying requirements for computing resources. In addition, public clouds shift computing environment risks – security or privacy breaches, technical support in case of failure, and disaster recovery – from the subscriber to the cloud vendor. The major pitfall of the public cloud model is that subscribers have no direct control over their IT resources running in the cloud, which poses serious threats with respect to security, privacy and confidentiality. As a best practice, consumers and public cloud vendors should sign Service Level Agreements (SLAs) to ensure that computing resources hosted in the cloud are protected from security and privacy risks and backed by disaster recovery, upholding high levels of uptime.

Private clouds are owned and run by individual businesses: a company plans, develops and manages its own virtual resources in a manner that meets the demands of its various departments and locations. The private cloud is a viable solution to the security and privacy risks presented by the multi-tenant public cloud environment. Budrienė & Zalieckaitė (2012) note that private clouds are created and used exclusively by a specific business, and thus provide the best control over the security, confidentiality, privacy and integrity of IT resources and associated services. The major aim of building a private cloud is therefore to give a business more control over its computing resources, as well as end-to-end visibility into cloud operations – capabilities that would not be achievable if a third-party cloud vendor hosted some or all of the company’s IT resources. Typically, private clouds run behind the business’s firewall rather than on the internet-facing side, so only authenticated internal users can access the cloud resources. As such, private clouds are the best cloud model for upholding availability, manageability, resiliency, security and privacy.

A hybrid cloud combines the public and private deployment models: businesses that adopt it run a private cloud while also contracting some services from third parties. This introduces complexities in deciding which IT resources to host on the private cloud and which to distribute to public clouds, as well as in its general execution. Budrienė & Zalieckaitė (2012) propose a rule of thumb: business-critical and sensitive computing resources should run internally on the private cloud, with the rest distributed to public clouds as necessary. The hybrid model thus offers an effective and efficient private-cloud infrastructure for sensitive workloads while redirecting the others to public clouds. Less sensitive applications and services, such as customer registration, general queries and FAQs, should be hosted on the public cloud, while the private cloud hosts more sensitive ones such as order tracking and payment. Such distribution of IT resources between the private and public clouds plays a key role in upholding security and privacy.

Figure 1 shows the three cloud deployment models – public, private and hybrid clouds.

Figure 1: Cloud deployment models – public, private and hybrid clouds (Budrienė & Zalieckaitė, 2012)

2.3 The future of data centers

Today, the world is alert to deteriorating climate change and environmental degradation, and “greener” IT has been identified as one potential remediation approach. Data centers and environmental sustainability are inseparable if we care about future generations (Nikos & Gillam, 2010). It makes sense that consolidating business IT resources into private, public and hybrid infrastructures run by data center and cloud specialists would considerably reduce adverse environmental impacts while unlocking new efficiencies. The following key elements will be continually implemented and bolstered in data centers to reduce their environmental footprint effectively (Molen, 2012):

  • Dynamic provisioning: enables proper allocation of IT infrastructure – servers, storage and networking – in a manner that meets specific application demands. Dynamic provisioning adjusts capacity to match demand fluctuations over time through automated, intelligent demand and load predictions. As such, data centers can avoid the inefficiencies that arise from over-allocating IT resources (a minimal sketch of the idea follows Figure 2 below).
  • Enhanced data center efficiency: data center designs must be optimized to be more power-efficient in the routine powering and cooling of IT equipment, lighting and small-scale non-IT uses, while allowing IT equipment to run optimally. The way physical facilities are constructed and IT equipment is deployed and managed greatly impacts actual energy use. Economies of scale and innovative techniques can considerably enhance power usage efficiency: innovative approaches include optimized power supply, tapping atmospheric air for cooling, and modular designs, while economies of scale are achieved by consolidating IT capacity and bolstering access to flexible capacity.
  • High server utilization relative to power consumption: data centers should run at high, stable utilization rates while consuming relatively little power, so that large server performance gains come with only a modest rise in power consumption. Virtualization is a key enabler of improved server utilization in data centers, as it allows many virtual servers to run on one physical server. It enables virtual IT resources to be scaled to suit application demands, instead of dedicating entire physical machines to a task, resulting in better utilization. An increase in server utilization will still increase power consumption, but the rise should be proportionally smaller, as shown in Figure 2.

Figure 2: Server utilization rate and energy consumption (Nikos & Gillam, 2010)
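
To make dynamic provisioning concrete, the following is a minimal Python sketch. It is an illustration only: the per-server capacity, headroom factor and function names are assumptions, not figures from the sources. The point is that allocated capacity tracks a smoothed demand forecast rather than a fixed worst-case peak, which is what eliminates idle, over-allocated servers.

```python
import math

# Hypothetical parameters, chosen purely for illustration.
SERVER_CAPACITY = 100   # requests/sec one server can handle (assumed)
HEADROOM = 1.2          # 20% buffer above the forecast (assumed)

def forecast_demand(history, window=3):
    """Naive load prediction: moving average of the most recent samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def servers_needed(history):
    """Provision just enough servers for predicted demand plus headroom,
    instead of permanently allocating for the worst-case peak."""
    predicted = forecast_demand(history) * HEADROOM
    return max(1, math.ceil(predicted / SERVER_CAPACITY))

# Example: request rates sampled over a day (requests/sec).
load_samples = [120, 340, 560, 410, 180]
print(servers_needed(load_samples))  # -> 5 servers for current demand
```

A real data center would replace the moving average with far more sophisticated prediction, but the principle is the same: capacity follows demand, not the peak.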

Apart from energy efficiency, environmental preservation and enhanced data center utilization, the world is experiencing a data explosion, with global data projected to reach approximately 35 zettabytes. Data centers must be built to host these growing data needs efficiently and effectively, and networking must be enhanced to address the communication needs of this big data. On-demand data centers have emerged as a way to deploy data centers creatively and quickly as needed – for example, prefabricating a data center and deploying it on demand to reduce construction cost and time. Data security and privacy will remain critical issues, and data centers are expected to keep strengthening them in an attempt to create public clouds with a security threshold equivalent to that of on-premise data centers. Srinivasan (2013) argues that cloud security remains a serious concern, since only the large-scale data centers run by multinational firms like Google and Amazon – approximately 5% of the total – are secure and energy-efficient. What about innovating affordable, easy-to-implement techniques to suit the smaller players in cloud computing? What about deploying data centers in locations such as under the sea for water cooling? Such initiatives are expected to be fully brought to life in the future.

Flash memory has been widely used in interfaces such as DIMM sockets alongside DRAM to provide a degree of non-volatile memory. The future will see servers use non-volatile memory such as STT-RAM to replace the processor’s internal cache memory and provide instant-on capability without the need for refreshes. The shift from volatile to non-volatile architectures will enhance the energy efficiency of data centers. Caching techniques that speed up access to business-critical data will also be boosted through the use of SSDs in data centers. In addition, data centers are like the brain of a business, hosting much of a company’s applications and data; aspects of artificial intelligence and neural networks for machine learning will therefore be built into data centers for automated optimization purposes (Molen, 2012).

3.0 Cases of businesses moving their IT systems to the cloud

This section explores three businesses of different sizes (small, medium and large) moving their IT systems – including email, POS, database, file storage, and web and application servers – to the cloud, to assess the pros, cons and estimated costs that apply in each case. Stieninger & Nedbal (2014) argue that moving IT resources to the cloud brings immense benefits but also a number of drawbacks. It is therefore important to consider the entire picture – both pros and cons – to identify the options that produce the best results and to mitigate potential risks.

3.1 Small business

3.1.1 Pros

Small businesses are typically constrained in the funds available for capital IT equipment, and the cloud is a cost-effective way to run application servers and software, databases and email, as well as to store data. Cloud-hosted application servers deliver large-scale computing power while reducing internal IT requirements, physical storage and the need for IT specialists, providing significant savings. The public cloud model under IaaS has emerged as a vital tool for small businesses, which lack the massive capital to invest in on-premise data centers. With IaaS, small businesses need neither buy hardware, networking and storage equipment nor maintain it. The cloud provides a pay-as-you-go platform, so small businesses can control their cloud usage and cut unnecessary costs. SaaS is also important for small businesses, as it eliminates the need for expensive equipment to host software on-premise; common applications include office software, email and databases (Kepes, 2012).

In the area of “Green IT”, small businesses perform best, since they have fewer IT requirements and can migrate virtually all of their IT infrastructure to the cloud (Nikos & Gillam, 2010). Consequently, their energy use, electronic waste and carbon footprint are significantly lower.

Today, computing is inclined towards ubiquity – anytime, anywhere computing – allowing users to access their files and applications conveniently, reliably and remotely at any time, from any location, and on any device. Personal files such as emails, or group files, may be hosted in the cloud for easy access rather than being stuck on PCs. With this ubiquity, cloud computing has eased collaboration, as team members can access files and work from one master document hosted in the cloud. Security is implemented through permission and access controls on files (Nikos & Gillam, 2010).

Cloud computing reduces security risk because it backs up data at an off-site location, decreasing the impact of cyber-security attacks such as hacking and malware propagation. Security is likely to be better on the cloud provider’s network, particularly if the provider complies with key industry security standards such as the ISO framework (Srinivasan, 2013). In addition, the loss of a PC does not affect the availability of data residing in the cloud. Cloud backups play an integral role in restoring applications and files, upholding a high level of information assurance and business continuity.

It is also worth noting that cloud computing improves business efficiency. It saves the time spent constructing an IT infrastructure and eliminates the hassles of day-to-day management of computing resources, enabling small businesses to concentrate on essential operations while IT resources are provided as a service by cloud providers. With cloud computing, and more specifically web services, small businesses can easily integrate with social networking platforms such as Twitter, LinkedIn, Facebook, Flickr, Focus.com and Google+, which offer unique means of advertising to reach potential customers, employees and partners (Budrienė & Zalieckaitė, 2012).

Every small business has expansion plans, whether by increasing products and services or venturing into new locations. In addition, the business environment is subject to change from a myriad of factors such as legislation and economic crises. Cloud computing enables a business to evolve its IT infrastructure efficiently and effectively to meet changing demands. It provides access to application servers across the globe, offering small businesses an opportunity to expand their reach and boost agility at minimal operational cost. SmartNet North America is an example of a company that used the cloud successfully to reach surveyors, engineers and construction specialists across the world; its employees are also widely distributed, since they need only bandwidth to access company resources from anywhere, contributing to expansion (Pariseau, 2014).

3.1.2 Cons

Small businesses often depend on consumer-grade or dial-up internet services, which may introduce bandwidth challenges when accessing IT resources in the cloud. An unstable internet connection causes problems in accessing cloud services, and the cloud may add significant costs if small businesses must procure faster internet services to overcome the bandwidth issue (Budrienė & Zalieckaitė, 2012).

According to Budrienė & Zalieckaitė (2012), cyberspace is under attack, and small businesses in the cloud face dire security and privacy challenges daily. Data-stealing malware such as Zeus is rife on the web, introducing risks of personal data loss and infringement of intellectual property. Email scams are also rampant online, tricking users into clicking malicious links or opening attachments that trigger malware downloads or steal data. Security and privacy breaches attract financial penalties, reputation damage and hindrances to optimal business performance.

The case for small businesses is further constrained by the fact that, unlike in medium and large businesses, their devices may not meet the required security threshold in terms of authentication and encryption. In addition, small businesses have tight budgets for internal security software that would prevent email and web-based malware threats from reaching the business (Budrienė & Zalieckaitė, 2012). As such, many remote access sessions of small businesses are prone to security and privacy attacks.

Being in the cloud provides a connection to the social networking environment, which offers an advertising platform and a pool of insights from which to derive business value. However, cybercriminals have identified social sites as a place to launch rogue pop-up applications and malicious links that steal data or execute malware attacks. Despite the business sense that social sites make, they therefore present serious security threats.

Storing files outside the internal network, possibly beyond national and/or regional boundaries, may break local or state data protection laws and regulations (Budrienė & Zalieckaitė, 2012). Lastly, there may be unexpected outages due to failures at the cloud provider’s end, causing unwanted business interruptions.

3.1.3 Estimated cost

Kepes (2012) estimates the cost of a 4GB RAM, single-core Linux-based cloud application server at $122.64 per month; a Windows-based equivalent goes for approximately $160.00 per month. A 1GB cloud MySQL database costs about $60.64 per month, and 500GB of file storage costs $61.25. Due to budget constraints in small businesses, free cloud services may be the better option for email (using platforms such as Gmail) and POS (for example, VendHQ, a reliable cloud-based POS system). VendHQ is free for up to 10 users, which provides a capable POS solution for small businesses (Vend Limited, 2014). For web services, the lowest price is $100 for businesses running on Amazon under AWS-based pricing. Amazon and Rackspace are among the leaders in renting cloud services, and from the figures shown here they are certainly affordable for small businesses. However, implementation, configuration and support, which apply to application software and databases, are additional cost factors (Marks, 2012); the cost of acquiring a server is independent of such factors. In addition, once the number of users is factored in, the cloud cost increases significantly; for example, a 10-user MySQL database may cost up to six times the figure specified. Therefore, the estimated cost for email, POS, database, file storage, web services and two application servers is at least $541.89 per month, although for the reasons identified here the real figure is typically higher.
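
The arithmetic behind the quoted floor is worth making explicit. The following is a minimal sketch, assuming the two application servers are the Windows-based option at $160.00 each and that email and POS ride on free tiers; that server mix is an assumption, since the text does not state it.

```python
# Hypothetical monthly cost breakdown for the small-business case (USD).
# Assumes two Windows-based application servers at $160.00 each; Gmail and
# VendHQ's free tier contribute $0. Line items are the figures quoted above
# from Kepes (2012), Vend Limited (2014) and Amazon (2014).
small_business = {
    "application servers (2 x Windows)": 2 * 160.00,
    "MySQL database (1GB)": 60.64,
    "file storage (500GB)": 61.25,
    "web services (AWS entry level)": 100.00,
    "email (Gmail, free tier)": 0.00,
    "POS (VendHQ, free for up to 10 users)": 0.00,
}
total = sum(small_business.values())
print(f"Estimated monthly floor: ${total:.2f}")  # -> $541.89
```

Implementation, configuration, support and per-user charges would sit on top of this floor, which is why the realistic figure is typically higher.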

3.2 Medium business

3.2.1 Pros

Cloud computing is analogous to outsourcing computing, networking, storage and associated services. It is therefore a vital resource for medium businesses, enabling them to focus on the core functional areas where most of their expertise lies. Cloud computing eliminates the need to invest heavily in time, labor and money to build, run and maintain data centers. The cloud is a bankable and sustainable alternative to running on-premise data centers (Nikos & Gillam, 2010).

Medium businesses are more likely than small businesses to enjoy the benefits of PaaS, because the larger the business, the more likely it is to develop in-house software applications to support different business lines, locations or new products. Cloud computing offers medium businesses access to PaaS, where applications may be developed and tested without purchasing additional hardware or software. This way, medium businesses can deliver applications affordably, more efficiently and much faster (Kepes, 2012).

These two points clearly indicate that cloud computing does not require huge infrastructure, leading to cost savings. Such savings may be directed towards other business areas, such as advertising and marketing, to drive growth, profitability and competitiveness.

The virtual nature of cloud computing allows medium businesses to expand, because scalability is easily achieved with minimal physical infrastructure. As soon as a business kicks off an expansion program, be it setting up new branches around the globe or sending employees far afield to reach a target audience, cloud computing makes it much easier (Babar & Chauhan, 2011). After all, there is no need to set up IT infrastructure in every new location, since an internet connection is enough to access business computing resources provided as a service.

Backup and on-demand recovery: cloud providers offer flexible and reliable data backup and recovery solutions to medium businesses. In its simplest form, cloud computing is itself a form of backup, since data is not stored on on-premise PCs. It allows medium businesses to store their data away from their local machines for disaster recovery and business continuity in the event of a security incident. Cloud computing also offers redundancy and resiliency, implementing automatic failover between different hardware (Babar & Chauhan, 2011).

The cloud has a huge, easily scalable storage capacity (Budrienė & Zalieckaitė, 2012). This eliminates the worry of running short of storage by providing virtually unlimited space. At the same time, cloud computing spares medium businesses hardware upgrades and updates, further decreasing IT costs.

Cloud computing facilitates location independence and computing device diversity (Budrienė & Zalieckaitė, 2012). The cloud can be accessed through a myriad of computing devices, as long as they have internet access: PCs, smartphones, tablets, laptops and intelligent wearable devices can all reach cloud services. Consequently, medium businesses can effectively adopt Bring Your Own Device (BYOD) practices, allowing staff to use their personal devices at the workplace. BYOD increases workforce productivity, flexibility, efficiency and convenience, because employees can execute their duties away from their desks.

3.2.2 Cons

The multi-tenant nature of the public cloud environment creates consistency issues: the number of consumers may grow to a level that the vendor’s infrastructure cannot handle. Cloud providers may declare that performance is guaranteed, but an unexpected burst of client requests or a network failure may disrupt normal business operations. Outages, downtime and similar frustrations can make cloud investments appear not worth banking on. Moreover, the cloud and its means of access are highly dependent on the internet connection, so any connectivity or communication network issue can render the entire cloud useless (Dutta et al., 2013).

Cloud computing also faces security challenges. Cybercriminals continually devise tools and techniques to exploit any security weaknesses they find in the cloud, increasing cases of data loss from hacking, social engineering and phishing. Denial-of-service is another attack that can lead to losses and reputation damage if a business’s IT resources running in the cloud are affected (Srinivasan, 2013). Is the cloud, then, truly dependable for continuous business operations?

Vendor lock-in is a major con of the public cloud model, since businesses implicitly depend on the cloud provider (Stieninger & Nedbal, 2014). Once committed to one vendor, it is quite difficult to move to another, and shifting huge volumes of data between cloud providers is a painful challenge. In addition, it is generally difficult to demonstrate compliance with relevant data protection laws and regulations, because consumers have limited control over the underlying applications that handle business data. Public cloud consumers also have limited control of, and visibility into, the underlying hardware and software, placing almost all cloud control with the vendor. It is therefore important to assess thoroughly what each provider offers, so that the vendor with the best services is picked and the reasons for ever shifting to another provider are minimized.

3.2.3 Estimate cost

A medium business may demand similar or slightly greater cloud requirements than a small business. The estimated cost of an 8GB RAM, dual-core Linux-based cloud application server is $202.64 per month, with a Windows-based equivalent at $251.58 per month. For the database, a 2GB MySQL database costs about $121.64 per month. Medium businesses require approximately 1000GB of file storage, costing $90.00 per month (Kepes, 2012). As with small businesses, a free email service such as Gmail would be a reasonable solution. For medium businesses, VendHQ POS costs $69.00 per month (Vend Limited, 2014). For web services, medium businesses fit Amazon’s medium-utilization option, costing $222.54 (Amazon, 2014). The overall cost is approximately $957.40 per month at the lower end, as the number of users per service constitutes an additional cost factor.
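
As with the small-business case, a short sketch shows how the line items could reach the quoted figure. The assumption that the business runs one Linux and one Windows application server is mine; the text does not state the mix.

```python
# Hypothetical monthly cost breakdown for the medium-business case (USD).
# Assumes one Linux and one Windows application server; email stays on
# Gmail's free tier. Figures from Kepes (2012), Vend Limited (2014) and
# Amazon (2014).
medium_business = {
    "application server (8GB, Linux)": 202.64,
    "application server (8GB, Windows)": 251.58,
    "MySQL database (2GB)": 121.64,
    "file storage (1000GB)": 90.00,
    "email (Gmail, free tier)": 0.00,
    "POS (VendHQ)": 69.00,
    "web services (AWS medium utilization)": 222.54,
}
print(f"Estimated monthly floor: ${sum(medium_business.values()):.2f}")  # -> $957.40
```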

3.3 Large business    

3.3.1 Pros

Large businesses have the financial capacity to run a hybrid cloud model, with business-critical and sensitive computing resources hosted in well-equipped on-premise data centers while less sensitive ones are distributed across appropriate public clouds. These businesses can then focus more on their core business, as the cloud environment is managed by specialists (Molen, 2012). The BYOD practice may also suit large businesses, allowing employees to use their personal devices on the job and thus increasing convenience, efficiency and productivity; after all, the applications and data reside in the cloud.

In addition, large businesses are best suited to adopting all three cloud service models – SaaS, PaaS and IaaS (Molen, 2012). For example, they can use PaaS to develop and deploy software applications quickly to meet various business needs. They can also leverage the elastic functionality of IaaS to grow their computing infrastructure at the pace of demand. Expensive software, such as cloud-based SAP, may be outsourced under SaaS without investing in additional hardware or software infrastructure.

Large businesses have the capacity to apply cloud vendors’ application programming interfaces (APIs) to automate their operational tasks; most small-to-medium businesses are too resource-constrained to exploit such public cloud capabilities. APIs are important for automating monitoring, scaling and provisioning, and are therefore the basis for everything that pertains to building the required functionality within a cloud vendor’s environment. As a result, large businesses can unlock more efficiency and performance in the cloud (Molen, 2012) – for example, provisioning more storage or computation to address increasing demand, as the sketch below illustrates.
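
As one hedged illustration of such API-driven automation (not drawn from the sources), the sketch below uses AWS’s Python SDK, boto3, to raise the capacity of an Auto Scaling group ahead of an anticipated demand spike. The group name, region and capacity figures are hypothetical.

```python
import boto3

# Hypothetical sketch: automating provisioning through a cloud vendor's API.
# Region, group name and capacity values are assumptions for illustration.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

def scale_for_peak(group_name: str, desired: int) -> None:
    """Request a new desired capacity; AWS launches or terminates
    instances to converge on it."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=group_name,
        DesiredCapacity=desired,
        HonorCooldown=False,  # apply immediately, ignoring cooldown timers
    )

scale_for_peak("web-tier-asg", desired=12)  # e.g. ahead of a holiday sale
```

The same pattern applies to monitoring and storage provisioning: a script, rather than a person, reacts to demand.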

Cloud computing allows large businesses to expand their global presence and establish themselves across the world. For example, with IBM’s SoftLayer, cloud-based data centers and associated points of presence (PoPs) can be located worldwide and connected via high-speed fiber to provide a coordinated global infrastructure. Multinational companies such as Toyota coordinate their global operations through such cloud computing capabilities and have the potential to expand even further (Stieninger & Nedbal, 2014).

3.3.2 Cons

The following are the key cons of cloud computing with respect to large businesses (Alali & Chia-Lun, 2012):

  • The public cloud or the internet connection may suffer outages and disrupt normal business operations. Downtime may result in poor business productivity, financial losses or reputation damage.
  • Security and privacy risks are posed by cyberspace and multi-tenant public clouds.
  • SaaS has a lower Total Cost of Ownership (TCO) in the initial years, because there are no huge capital investments in licensing or support; later, however, the accumulated monthly charges may surpass the initial cost savings.
  • Inflexibility: moving computing resources to a public cloud vendor amounts to locking the business into a proprietary environment, with little or no capability to shift to another vendor due to the huge volumes of data that would need to be transferred.
  • Poor support for applications and services hosted on public clouds and developed by third-party vendors, with FAQs and online forums often the only means of support.
  • Latency when connecting to remote cloud services due to traffic bursts.
  • Incompatibility issues due to the wide variety of devices expected to access the cloud.     

3.3.3 Estimate cost

Amazon (2014) places large companies under its Option 1 pricing, with web services and database servers classed as on-demand, heavy-utilization services at a cost of $609.92 per month. For email, large organizations may be well suited by Google Apps for Work, which goes for $10 per month per user (Google, 2015). VendHQ POS costs $199.00 per month for large businesses (Vend Limited, 2014). A 16GB RAM, 4-8 core application server costs approximately $502.64 per month. For file storage, large businesses may incur $168.00 per month (Kepes, 2012). The total cost for these cloud services is approximately $1,600.00 per month.
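
A sketch of how the approximate total could be reached follows. The email user count is an assumption chosen for illustration, since the sources do not state one.

```python
# Hypothetical monthly cost breakdown for the large-business case (USD).
# The number of Google Apps users (12 at $10/user) is an assumed headcount;
# other figures are from Amazon (2014), Google (2015), Vend Limited (2014)
# and Kepes (2012).
EMAIL_USERS = 12  # assumption for illustration

large_business = {
    "web services + database (AWS heavy utilization)": 609.92,
    "email (Google Apps for Work)": 10.00 * EMAIL_USERS,
    "POS (VendHQ)": 199.00,
    "application server (16GB, 4-8 cores)": 502.64,
    "file storage": 168.00,
}
print(f"Estimated monthly total: ${sum(large_business.values()):.2f}")  # -> $1599.56
```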

4.0 Conclusion          

Cloud computing has been a critical business success enabler since the 1950s, when a single mainframe computer would serve many thin clients, thus saving costs. Businesses, regardless of size or type, have moved their IT resources to the cloud to leverage the benefits it brings. So, should businesses bank on those benefits and venture into the cloud? It is necessary to consider the pros, cons and estimated costs to determine readiness for the move; this way, a business can decide whether adopting cloud computing at a specific time is right or wrong. Despite the obvious necessity of considering a move to the cloud so as not to miss out on its benefits, the pros and cons must be well understood before rushing towards the perceived gains. Cloud computing promises an efficient, faster and more affordable means of operating a business, if the cons can be adequately managed.

Cloud computing provides valuable benefits to businesses, whether small, medium or large. These benefits include a pay-as-you-go platform that enables a business to control what it consumes, minimal or zero initial costs for IT infrastructure, high elasticity allowing easy scaling of computing resources, ubiquitous access to IT resources, energy efficiency and environmental protection, improved employee and business efficiency, and competitiveness. However, threats to data security and privacy, along with cloud outages, are the major cons, implying that the cloud is an important resource for businesses only if such cons are remedied. With a reliable cloud service provider that is fully accredited and regulated by independent global cloud standards organizations, cloud computing is one of the most vital enablers of business success. Certified cloud providers assure consumers that their IT resources are protected by up-to-date security systems and standards that prevent data loss. In the event of data loss from a PC, there is an up-to-date backup to restore from, since applications and data are securely stored in accredited cloud vendors’ data centers. Such backups play an integral role in upholding information assurance and business continuity.


5.0 References          

Alali, F., & Chia-Lun, Y. (2012). Cloud computing: Overview and risk analysis. Journal of Information Systems, 26(2), 15-33.

Amazon. (2014). How AWS pricing works. Retrieved from https://media.amazonwebservices.com/AWS_Pricing_Overview.pdf

Babar, M.A., & Chauhan, M.A. (2011). A tale of migration to cloud computing for sharing experiences and observations. ACM, 11(5), 50-56.

Budrienė, D., & Zalieckaitė, L. (2012). Cloud computing application in small and medium-sized enterprises. Issues of Business & Law, 4(1), 201-230.

Dutta, A., Guo, C., & Choudhary, A. (2013). Risks in enterprise cloud computing: The perspective of IT experts. Journal of Computer Information Systems, 53(4), 40-47.

Google. (2015). Google Apps for Work. Retrieved from https://www.google.com/intx/en_nz/work/apps/business/pricing.html

Kepes, B. (2012). Cloudonomics: The economics of cloud computing [white paper]. Rackspace.com. Retrieved from http://www.rackspace.com/knowledge_center/whitepaper/cloudonomics-the-economics-of-cloud-computing

Marks, G. (2012, January 23). Price of the cloud still out of reach for small businesses. Forbes. Retrieved from http://www.forbes.com/sites/quickerbettertech/2012/01/23/price-of-the-cloud-still-out-of-reach-for-small-businesses/

Molen, F. (2012). Get ready for cloud computing (2nd ed.). Van Haren.

Nikos, A., & Gillam, L. (2010). Cloud computing: Principles, systems and applications. Springer Science & Business Media.

Pariseau, B. (2014). Small businesses drop data centers for advantages of cloud computing. SearchCloudComputing.com. Retrieved from http://searchcloudcomputing.techtarget.com/feature/Small-businesses-drop-data-centers-for-advantages-of-cloud-computing

Smoot, S.R., & Tan, N.K. (2012). Private cloud computing: Consolidation, virtualization, and service-oriented infrastructure. Elsevier.

Srinivasan, S. (2013). Is security realistic in cloud computing? Journal of International Technology & Information Management, 22(4), 48-65.

Stieninger, M., & Nedbal, D. (2014). Characteristics of cloud computing in the business context: A systematic literature review. Global Journal of Flexible Systems Management, 15(1), 60-68.

Vend Limited. (2014). Online retail POS pricing. Retrieved from http://www.vendhq.com/pricing

The checkout procedure implemented by the FreshDirect site

The following steps outline the checkout procedure implemented by the FreshDirect site and presented in Figure 1:

  1. First, register with the site.
  2. Navigate the product links or use the “Search Box” for specific items.
  3. Specify quantity and other options as necessary.
  4. Click “Add to Cart” to add items to the basket. One can also “Save to List” as shown in Figure 2.
  5. Go to “Checkout”.
  6. Specify delivery and payment information, and submit the order.
  7. FreshDirect sends order confirmation email and delivers the ordered items.

Figure 1: FreshDirect’s checkout process

Figure 2: The “Save to List” feature.

Pros

  • The merchant has a truly easy to use and robust website that implements a completely linear and clear checkout process as shown in Figure 1.
  • Has a clear checkout progress indicator to show what has been completed and remaining steps (see Figure 1).
  • Has well-designed pop-ups on hover, and clear navigation and flow.
  • “Carries” the shopping cart along the checkout so customers can review it as needed.
  • Customer experience is enhanced through clear steps and zero distraction (no detours to other links) while shopping.
  • Uses well-known shortcuts such as “+” and “-” to add and reduce quantity.
  • The delivery calendar is clear, so one is not left to guess times and/or dates.
  • Nice fill-in forms and clear information presentation.
  • One can review items added to the cart at any stage, view the order summary and make toll-free enquiries, as shown in Figure 3, so customers are never trapped in the process.
  • There are also other enquiry options – FAQs and email.
  • The “Save to List” feature allows one to personalize shopping lists based on favorite items for future plans and needs.

            Figure 3: FreshDirect’s flexible cart and toll-free contact for help

Cons and recommendation

  • FreshDirect offers free delivery to an apartment for up to three orders, but what happens if the previous occupant vacates and a new one orders from the same apartment? Such deals should be clearly communicated and carefully implemented to prevent eligibility disputes that might damage the customer experience.
  • Despite the linear process, the checkout requires creating an account upfront instead of transitioning from the shopping cart to delivery details, wasting time by forcing one to fill in everything beforehand. Instead, FreshDirect should prompt only for an email address to which order confirmation details can be sent, and allow account creation later, to prevent checkout abandonment.
  • FreshDirect does not display any security seals from cybersecurity third parties such as COMODO, TRUSTe or Verisign, so potential customers may feel insecure when prompted for private information such as residential address, credit card details and personal contacts. Online shopping is trust-dependent, so security seals from trusted third parties should be included in the checkout steps or in the footer to bolster customer confidence in the security and privacy of their data.
  • Once in the checkout steps shown in Figure 1, there is no explicit link to continue shopping, leaving customers on their own to figure out how to add more items to the cart. FreshDirect should provide a clearly signposted “Continue Shopping” link to speed up the shopping and checkout processes and guide customers accordingly.
  • There is no option for printing or emailing cart contents, yet one may be shopping for somebody else who needs to verify or sign off the list. A print or email option is therefore necessary to give consumers a purchase list for verification with the concerned parties prior to the actual purchase.

Reference

FreshDirect. (2015). Welcome to FreshDirect. Retrieved from https://www.freshdirect.com/index.jsp

The theory of character of Athens is evident in architecture

The theory of the character of Athens is evident in architecture, city research, urban space, urban development and urban planning. The character of Athens defined urbanism in Greece and beyond. Urban development can be deeply influenced by the characteristic logic of the factors that make a city stand out (Hall 26). Athens is the origin of many artistic ideas, and classical and western civilization are considered to have originated there; it thus plays a pivotal role in a wide array of fields, urbanism being one of them (Camp 43).

Athens has several human, natural and cultural characteristics: the ruins, the Athens Academy, the Parthenon, the Museum of Athens, recreation facilities, architecture, the transport system and many more (Camp 46). Philosophical thought, to which Athens is no exception, shapes architectural practice and theory (Lang & Moleski 1). The rationalists trace their roots to Platonic philosophy and to the philosophies of Spinoza, Leibniz and Descartes, while empiricism is associated with architectural ideas and urban design that spread across Europe in the 20th century. Designers who are rationalists rely on reasoning to establish ideal urban plans and buildings (Lang & Moleski 3). Empiricist designers rely on their observations of what functions and what does not, and borrow heavily from precedent (Lang & Moleski 3). What makes a good place or building? Whatever the answer, future designs still have to function. Proposals for the civilization of the machine age, evident in designs for the Citrohan houses, the Maison Suisse and the Cité Universitaire in Paris in the early 1920s, greatly influenced the development of urban design in the 20th century (Bairoch 29). Ward (67) charts the creativity of the 20th century and the social improvement agendas that followed the reconstruction hopes of 1945, and argues that many cities experienced new planning ideologies based on sustainable urban growth in the 1990s.

Athens is the capital and largest city of Greece, with a recorded history of approximately 3,400 years. The first contemporary city plan was defined by the Acropolis and consisted of a triangle formed with the new Bavarian king’s palace and the ancient Kerameikos cemetery (Kyriakopoulos 79). For almost the entire 19th century, Neoclassicism was the major architectural approach through which French, Bavarian, and Greek architects, for example Hansen and Klenze, designed the earliest important public structures in Athens (Camp 63). The early 20th century saw some deviation from Neoclassicism toward eclecticism. In the 20th century, modern architecture such as Bauhaus and Art Deco started influencing almost every Greek architect, and both public and private buildings were constructed in these styles (Lang & Moleski 7). The universal ideas of modern architecture were depicted: design deriving strictly from purpose, elimination of unnecessary details, right-angled corners, visual expression of structure, the unhidden natural appearance of materials, use of industrial materials, and emphasis on vertical and horizontal lines (Hall 61). During the development and expansion of Athens from the 1950s to the 1960s, the International Style and other modern movements played a key role, and several neoclassical buildings were demolished as Greece rapidly urbanized and industrialized. The International Style de-emphasized symmetry in favor of balance and mass in favor of volume, and rejected applied ornament (Hall 81).

In 1921, the organized progress of the city of Athens was interrupted and haphazard development began, for around 1,500,000 Greeks, mostly penniless, returned to Greece from Asia Minor as marginalized ethnic groups were exchanged between Turkey and Greece (Bairoch 112). Despite government efforts to resettle them elsewhere, the majority thronged into shanty towns around the periphery of Athens and Piraeus, swelling the area’s population from 473,000 to approximately 718,000. The city then started spreading in two directions, northward toward the village of Kifisia and southward toward Piraeus (Tournikiotis 84). The Acropolis played a vital role in urbanism. This hilltop housed temples, the famous Parthenon and several other public structures that enhanced the Athenian character. Ancient Athenians had many significant ideas and were very thoughtful; they enjoyed the logical study of science, history, and philosophy, to name a few. Athenians placed heavy emphasis on architecture, the arts, and literature. They built thousands of statues and temples that embodied their understanding of beauty. The term “classical” describes their enduring artistic and architectural styles. Further, Athenians enjoyed a democratically constituted government in which some of the people shared power (Camp 47).

In the 1940s, Athens suffered severely. During the German occupation, many inhabitants died of hunger and the city started to disintegrate for lack of maintenance. After the Germans left, civil war broke out because some of the allied-equipped fighters did not put down their arms. With its modern shops and tall buildings, Athens is the first city that feels European to a traveler arriving from the Middle East (Kyriakopoulos 63). Athens is unique in that it cannot be described as a combination of West and East; rather, it is Greek, or Athenian to be specific. Following the death of Pericles in 429 BCE, Athenians entered a period of subjugation that lasted for approximately 2,000 years. The city was freed in 1833, and over the next 170 years it was the scene of dozens of revolutions (Camp 78). In addition, civil war and cruel foreign occupation took place. This long history of suffering and passion had a great impact on the Athenian character. The basis of that character is a relentless desire and passion to survive, reinforced by a deep sense of patriotism and allegiance to the family. The Greek Orthodox Church, directed by a council residing in Athens, became a main force in keeping the literature, tradition, and language alive when such affairs were prohibited, and the majority of the people still supported them (Tournikiotis 11). Instead of breaking the Athenians, the long spell of oppression honed their intelligence and rendered them strong but supple, while preserving their generosity and warmth.

In 1833, Athens barely existed. Before the fight for independence calmed down, the city held around 4,000 people sprawled in small houses on the northern slopes below the Acropolis. Otto, the newly imported king of the Hellenes, was installed in the city’s one and only two-storey stone building, while his architects hurriedly drew up plans for a palace and a fresh Athens in the neighboring fields. Below the well-located but very simple palace, Sintagma, an extensive garden square, was laid out. Wide streets were built that are still the city’s principal main roads (Panepistimiou and Stadiou), between which a tidy grid of narrow side streets was designed. The kind of housing that developed was commonly the type of architecture typical of Victorian London: porched, solid, rather impressive, though the later imitations were monotonous and graceless. In Athens it is referred to as the Ottonian style (Kyriakopoulos 11).

There is no doubt that the city of Athens forms the most significant cultural center of the ancient classical world. Even as new cultural forces and political centers emerged, the city’s importance radiated outward and it remained an essential artistic center. The city’s location places its heritage at the highest level. The urban planning, art and architecture of Athens are particularly interesting due to the overall character of the city, marked by a love of beauty and of democracy, among other traits (Kyriakopoulos 11). The Acropolis remains the initial core that can explain the general principles behind 20th-century urbanism in Athens and beyond. Athens has been a recurrent urban center from its inception to the 20th century, characterized by progressive city leadership since the ancient period, coupled with excessive environmental degradation and population pressure in the modern era of the 1990s (Kyriakopoulos 13).

Since members of the polis, or city-state, lived in both the town center and the surrounding villages, Athens fostered togetherness among its inhabitants and their neighbors. The city-state concentrated urban dwellers in a central city, made possible by commercial agriculture that generated adequate storable food for consumption by the urban non-food producers. Urbanization cannot happen without a concentration of a large number of inhabitants, and without agriculture there is less population concentration. Athens is an example of such a city-state. Moreover, Rome influenced urban planning and architecture over several centuries, and the Graeco-Roman world was thus characterized by an extremely urban culture (Bairoch 55).

Other characteristics that are evident in 20th-century urbanization are: the Greeks’ written alphabet, invented by 700 B.C.; coined money, devised by the ancient Greeks; central banking, another Greek invention; cultural functions for Greek citizens conducted from Athens; and the Agora, the place of public functions and the focus of urbanism. One can therefore argue that ancient Athenian civilization effectively gave rise to prosperous city-states in Greece as well as to the swollen urban populations and urban planning of the 20th century. For almost the entire 19th century, Athens remained a relatively small capital city of Greece, mainly serving as a tourist center because of its ancient monuments. In the 20th century, the population of Athens grew from approximately 100,000 in 1900 to an estimated 3,000,000 by 1996 (Bairoch 89).

The expansion of Athens, especially since World War II, has resulted in many environmental issues. Air quality has been adversely affected by high population density and a large number of automobiles (Hall 93). Traffic congestion is a persistent issue, as the initial city plan was not designed to handle millions of vehicles.

Once the city of Athens was established, it started growing exponentially. By 1907, the city’s municipality had reached 167,479 inhabitants. Harmony Square (Omonia) had been constructed at the western edge of the two main avenues, with other broad streets radiating from it, though it never achieved the hoped-for balance with Sintagma. The railway to Piraeus had been built by then, with its station constructed near the ancient Agora. The city plan launched a logical southward growth along this axis, which impelled real estate developers to beckon northward. The palace garden nearly came into contact with the Arch of Hadrian and the huge columns of the temple of Olympian Zeus; the garden has turned out to be one of the rare public recreational areas in Athens. The Academy of Athens was built along Panepistimiou Avenue, in marble quarried from Mount Pentelicus. Its new neighbors were the National Library and the University of Athens (Camp 123).

In conclusion, Athens is and will remain a city on the hill and its slopes. In the 20th century, despite its high population, the character of Athens significantly influenced urbanization in Greece and beyond. Athens stretches across the plain of Attica, the Attica basin, which is bound by four big mountains: Aegaleo to the west, Parnitha to the north, Penteli to the northeast and Hymettus to the east of the metropolitan area (Kyriakopoulos 39).

References

Bairoch, Paul. Cities and Economic Development: From the Dawn of History to the Present. University of Chicago Press, 1991. Print.

Camp, John. The Athenian Agora: Excavations in the Heart of Classical Athens. Thames & Hudson, 1986. Print.

Hall, Peter. Cities of Tomorrow: An Intellectual History of Urban Planning and Design in the Twentieth Century. Wiley-Blackwell, 2002. Print.

Kyriakopoulos, Victoria. Athens. Lonely Planet Publications, 2009. Print.

Lang, Jon, and Walter Moleski. Functionalism Revisited: Architectural Theory and Practice and the Behavioral Sciences. Ashgate Publishing Ltd, 2010. Print.

Tournikiotis, Panayotis. The Historiography of Modern Architecture. MIT Press, 1999. Print.

Ward, Stephen V. Planning the Twentieth-Century City: The Advanced Capitalist World. Wiley, 2002. Print.

What is information assurance?

1.0 Introduction
1.1 Project description
2.0 Review of other work
2.1 What is information assurance?
2.2 Human factor as vulnerability
2.3 The 20 critical security controls
3.0 Rationale
4.0 Systems analysis
5.0 Project goals and objectives
6.0 Project timeline
7.0 Project development
7.1 Project execution
7.1.1 Inventory of Authorized and Unauthorized Devices
7.1.2 Inventory of Authorized and Unauthorized Software
7.1.3 Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
7.1.4 Continuous Vulnerability Assessment and Remediation
7.1.5 Malware Defenses
7.1.6 Application Software Security
7.1.7 Wireless Access Control
7.1.8 Data Recovery Capability
7.1.9 Security Skills Assessment and Appropriate Training to Fill Gaps
7.2 Problems encountered
7.3 Changes to original plan
7.4 Unanticipated requirements or components that needed to be resolved
7.5 Actual and potential effects of this project
7.6 Success and effectiveness of the project
8.0 Additional deliverables
Appendix A: Project Timeline
Appendix B: Primary attack types
9.0 References

1.0 Introduction        

1.1 Project description

This project entails creating an information assurance program for Computer Sciences Corporation (CSC), a leading Fortune 500 information technology (IT) services and solutions multinational headquartered in Falls Church, Virginia. The information assurance program is aimed at ensuring that CSC complies with best practices, principles and standards regarding data and information systems security, based on the Council on CyberSecurity’s (CCS’s) twenty critical security controls (CSC 1 to CSC 20), the areas that define what such a program must address. The information assurance program is implemented to cover the first nine critical security controls (CSC 1 to CSC 9) and ultimately ensure that the company’s and its clients’ data and information assets are sufficiently available and accessible on demand; that their integrity is sound; that authenticity is validated and verifiable; that privacy and confidentiality are upheld; and that their origin can be verified. This project documents the process of implementing an information assurance program within the framework of the following 20 security controls proposed by the Council on CyberSecurity (2014):

  1. Inventory of Authorized & Unauthorized Devices (CSC 1)
  2. Inventory of Authorized & Unauthorized Software (CSC 2)
  3. Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers (CSC 3)
  4. Continuous Vulnerability Assessment & Remediation (CSC 4)
  5. Malware Defenses (CSC 5)
  6. Application Software Security (CSC 6)
  7. Wireless Access Control (CSC 7)
  8. Data Recovery Capability (CSC 8)
  9. Security Skills Assessment & Appropriate Training to Fill Gaps (CSC 9)
  10. Secure Configurations for Network Devices such as Firewalls, Routers, and Switches (CSC 10)
  11. Limitation and Control of Network Ports, Protocols and Services (CSC 11)
  12. Controlled Use of Administration Privileges (CSC 12)
  13. Boundary Defense (CSC 13)
  14. Maintenance, Monitoring & Analysis of Audit Logs (CSC 14)
  15. Controlled Access Based on the Need to Know (CSC 15)
  16. Account Monitoring & Control (CSC 16)
  17. Data Protection (CSC 17)
  18. Incident Response and Management (CSC 18)
  19. Secure Network Engineering (CSC 19)
  20. Penetration Tests and Red Team Exercises (CSC 20)

These controls are aimed at addressing risks that surround the company’s systems and data in the current environment and its wide array of products, services and processes.

2.0 Review of other work

2.1 What is information assurance?

According to Willett (2008), information assurance describes and utilizes a collection of approaches, norms, systems, practices, and procedures to preserve the integrity of work involving individuals, processes, technology, data, information and backup infrastructure. Information assurance is the collection of actions and/or processes aimed at protecting information systems and data by guaranteeing availability, integrity, authentication, confidentiality, and non-repudiation. These actions and/or processes incorporate planning for the recovery of information systems by including protection, detection, and response capacities. Information assurance ensures the following elements are preserved: privacy, integrity, accessibility, ownership, utility, validity, accountability, non-repudiation, approved use and confidentiality of information in IT systems as well as information in transit (Birchall, Ezingeard, McFadzean, Howlin & Yoxall, 2004).

Information assurance can be thought of as the approach to guaranteeing information management across the following processes: access, usage, processing, transmission, and storage. In addition, information assurance protects the IT systems and processes that are used for different business purposes (Blyth & Kovacich, 2006). Information theft or misuse may be caused by hackers, disgruntled insiders, former employees, or malware, with intentions ranging from revenge and extortion to surveillance and sabotage. Bishop (2003) argues that information assurance specialists are tasked with creating robust systems and procedures that can effectively prevent breaches of IT systems security and/or recover fully and rapidly after an attack. So, what is the difference between information security and information assurance? According to Birchall et al. (2004), information security entails a set of components and procedures, within which information assurance is one element. As such, information assurance and information security are interdependent, and specialists in the two areas work closely for optimal security results. Failure to build a concrete working relationship may lead to serious vulnerabilities.

Threats to information assets have changed as cybercriminals become more knowledgeable and devise better attack tools, thus improving their capability to launch attacks. The mobility of IT systems users is also at an all-time high, especially with adoption of Bring-Your-Own-Device (BYOD) practices and tele-working. These trends increase the possibility of execution of successful attacks. Information security experts have recognized the need to work as a community when dealing with threats and ultimately deliver the best results to the entire industry (CCS, 2014b).

2.2 Human factor as vulnerability

The world has reached an intriguing period with regard to the development of data protection and cybersecurity. Today, it is almost a daily occurrence to hear of huge data losses, intellectual property theft, identity theft, credit card fraud, DoS attacks, and malware attacks. Consequently, laws, regulations, rules, orders, and other compliance requirements concerning information security and privacy have been established, which in turn has led to greatly increased documentation (Tipper, Krishnamurthy, & Joshi, 2008). Users and other stakeholders must have a solid understanding of the relevant parts of IT security documentation to ensure they exercise their responsibility in information assurance. Many information systems audit reports, IT security journals and research papers have recorded that IT security experts agree that users contribute significantly to vulnerability in information systems and networks. Therefore, there is a need to consider the human element rather than concentrating solely on technological implementations. This way, an organization is able to provide a sufficient and better level of IT security (Tipper et al., 2008).

Today, organizations confront the task of implementing information assurance and data protection measures to comply with an extensive range of regulations and requirements. It is now common for storage managers and professionals to be asked to deal with components of information assurance presented as a security checklist. However, placing information assurance in the hands of persons without basic knowledge of the field, armed only with a security checklist, leads to failure. The focus of information assurance is on guaranteeing that approved users can access authorized information within the authorized timelines (Willett, 2008).

An organization complies with various laws and regulations, guidelines, best practices, and standards by tailoring its information assurance to a team-based and collaborative process. Sharing vital information about security concerns among users and IT personnel is an integral part of a successful information assurance process. When the whole collection of information assurance components and processes is functional, and roles and responsibilities among different stakeholders are understood, the overall security risk to enterprise information is considerably reduced (CCS, 2014b).

2.3 The 20 critical security controls

According to the CCS (2014a), the 20 critical security controls include: Inventory of Authorized & Unauthorized Devices; Inventory of Authorized & Unauthorized Software; Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers; Continuous Vulnerability Assessment & Remediation; Malware Defenses;  Application Software Security; Wireless Access Control; Data Recovery Capability; Security Skills Assessment & Appropriate Training to Fill Gaps; Secure Configurations for Network Devices such as Firewalls, Routers, and Switches; Limitation and Control of Network Ports, Protocols and Services; Controlled Use of Administration Privileges; Boundary Defense; Maintenance, Monitoring & Analysis of Audit Logs; Controlled Access Based on the Need to Know; Account Monitoring & Control; Data Protection; Incident Response and Management; Secure Network Engineering; and Penetration Tests and Red Team Exercises. The demand for private corporations and governmental agencies to have a streamlined process of information assurance for their IT infrastructure led to the development of these controls (CCS, 2014b).

The 20 critical security controls for effective cyber defense are industry-standard guidelines for effective information assurance. In 2008, the National Security Agency (NSA) was requested to come up with controls that would help government agencies and firms in the United States Defense Industrial Base deal with increasing cases of data loss and hacking (CCS, 2014b). The NSA published the first draft with 20 controls in mid-2009, and it was given to hundreds of information technology and security organizations for further assessment and critique; more than 50 organizations responded with numerous comments on the document (CCS, 2014b). Strikingly, the organizations overwhelmingly supported the idea of a standardized array of security controls. A consortium involving several government agencies and other organizations was formed to further refine these controls. Information on attacks was reviewed every 6 to 12 months to ensure that potential methods of dealing with security risks were continuously included (CCS, 2014b).

The U.S. Department of State validated the published 20 security controls by evaluating whether the attacks that occurred in 2009 against different federal agencies were covered by these controls (CCS, 2014b). Then, a program focused on building automated capacities to implement these key controls and give daily protection status data to each system administrator within the State Department was launched. The State Department quickly attained a decrease of more than 88% in exposure-based risk across its more than 85,000 IT systems (CCS, 2014b). Based on the success of this project, the approach has been widely applied as a model by both public and private organizations.

Additionally, in 2011 the United Kingdom embraced the 20 critical controls as the standard for information assurance within its government organizations and critical industries (CPNI, 2015). In 2013, the Council on Cyber Security was tasked with supporting these controls and promoting their adoption and usage globally (CCS, 2014b). As an autonomous, global charitable organization, the Council on Cyber Security works closely with IT security experts globally to detect, approve, advance and maintain information assurance best practices and standards, especially within the 20 critical controls framework (CCS, 2014a). This global collaboration guarantees that these security controls remain streamlined, relevant, current, and universally accepted. As such, these controls can be implemented and scaled accordingly to uphold compliance with all government and industry-specific security requirements. The 20 security controls concentrate on various technical measures and actions, with the main objective of helping organizations prioritize their mitigation techniques against the day-to-day attacks on IT networks and systems (Centre for the Protection of National Infrastructure [CPNI], 2015).

3.0 Rationale

Today, CSC works in three broad service areas: the public sector, including major federal agencies; IT managed services; and IT business solutions and services. In the public sector, CSC is among the main IT solutions and services suppliers for the U.S. federal government. The corporation provides IT services to the U.S. Department of Defense, the Federal Bureau of Investigation (FBI), the Central Intelligence Agency (CIA), the Department of Homeland Security, and the National Aeronautics and Space Administration (NASA) (CSC, 2015). It is worth noting that federal contracts account for a large percentage of CSC’s revenues. As a major federal government contractor and a leading IT services and solutions provider for big companies across the world, it is critical for CSC to secure its information systems and data.

Most strikingly, there is concern that CSC may be targeted by hackers and foreign intelligence operatives aiming to use the company’s networks to break into its clients’ information systems, especially federal systems. The issue is further amplified by the Edward Snowden scandal, in addition to the 2009 attack on defense contractors by hackers suspected to be based in China (Zetter, 2010). CSC must guarantee that its information systems comply with security and privacy laws, government and industry regulations, industry standards, procedures, and guidelines.

Each government agency is required by law or presidential directive to implement an information assurance program, and most of these agencies have issued guidelines for their information assurance processes. These guidelines amplify the effect of information assurance laws, orders, and agency-specific methods, and every company that has access or links to government information systems must have an information assurance program of its own to ensure compliance (Willett, 2008).

In addition, in a drastically changing environment of IT and business operation processes, it is a fundamental requirement that companies keep developing and enhancing their procedures and strategies for reducing security vulnerabilities (Tipper et al., 2008). Therefore, the information assurance program is an essential tool to offer the most current techniques and policies aimed at fostering greater security among CSC’s enterprise IT systems, networks and data. In addition, the 20 controls provided by industry experts affiliated with the Council on CyberSecurity are critical to provide CSC with a better capacity to operate reliably, safely and securely in cyberspace. According to CCS (2014b), these controls are critical as they provide reliable recommendations for digital defense that offer specific, actionable approaches to stop the most pervasive day-to-day security incidents. Moreover, they are developed and maintained by an expansive volunteer group of security specialists to provide effective and important practices for any company.

The information assurance program is a vital component to help CSC manage its information systems’ risk concerns and ensure that data assets are always protected in an effort to uphold their confidentiality, integrity, and availability. The program acts as a guarantee that CSC’s data, information systems and networks are always available to their authorized users on demand to help them achieve their daily objectives. Information stored in CSC systems and networks may be attacked by cybercriminals and/or disgruntled insiders from various sources and in different ways. Security attacks range from injection of malicious code into information systems, loss of computers or other computing devices holding sensitive business data, illegal access-level escalation by dissatisfied workers, intellectual property theft, website defacement, phishing, social engineering, malware propagation, Denial of Service (DoS) and man-in-the-middle (MITM) attacks, to complex digital terrorism executed by specialized groups or foreign spies. Information security experts report that cyber-attacks have assumed a destructive and complicated form, prompting governments and other cyber stakeholders globally to form specialized efforts aimed at handling this issue as well as fighting the risk of digital terrorism (Willett, 2008).

The possibility of cyber-attacks that lead to human fatalities and huge financial costs is sobering. The potential consequences are so real that if governmental and organizational IT systems are exposed to cybercriminals in any way, a catastrophe may happen. Global cybercriminals are progressively focusing on security vulnerabilities in information systems and networks to launch attacks. Consequently, many information systems experts are channeling their financial resources and expertise into uncovering existing and potential vulnerabilities in IT systems (Schou & Shoemaker, 2006).

Today, there is easy access to advanced tools that can expose IT systems vulnerabilities without the need for deep technical knowledge of such weaknesses, putting organizations and governments at increasingly higher risk (Schou & Shoemaker, 2006). Among the key CSC vulnerability areas that the information assurance program has effectively covered are:

  • Social engineering: attacks where unsuspecting users are tricked into disclosing important personal details or other sensitive information, for example systems login credentials, to persons or applications that pose as genuine (Blyth & Kovacich, 2006).
  • Distributed Denial of Service (DDoS): attacks that are planned to overpower information systems or enterprise networks, rendering users incapable of performing their intended tasks. This may have serious implications for customer service, as normal business operations and procedures may be totally disrupted (Blyth & Kovacich, 2006).
  • Malicious code inclusion and malware propagation: malicious code may be inserted to gather enterprise-critical data or cause damage, while malware applications are mainly developed and deployed into organizational IT infrastructure with the intention of destabilizing the infrastructure or capturing data (Bishop, 2003).

The critical security controls proposed by the Council on Cyber Security resulted from mutual knowledge and recognition of real security attacks and effective defense measures implemented by different IT security experts (CPNI, 2015). It is therefore important to acknowledge the crucial role played by these control measures in creating a defense platform covering the entire CSC ecosystem (organizations, IT infrastructure, data and users) alongside every action conducted within the environment (response to security threats, threat analysis, IT systems and services usage, security policy making and IT systems auditing). The information assurance program guarantees that the implementation of the 20 critical security controls provides the most effective and efficient set of tools for helping CSC detect, respond to, prevent, mitigate and remedy any damage resulting from attacks, from the simplest to the most complex. Other than identifying and stopping looming systems exposure, the security controls are also important for identifying IT components that have already been attacked and/or exposed and signaling effective remediation procedures. As such, the defense measures are crucial in helping CSC reduce the probability of suffering successful initial attacks. This can be effected by performing device and software hardening to identify any vulnerability, for example exploitable default configurations, in order to deal with long-term security threats within CSC’s information systems and networks and develop a solid and persistent defense capacity (CCS, 2014b).

4.0 Systems analysis

The 20 critical security controls stipulated under the information assurance program allow CSC to prioritize and eventually focus its resources (personnel, finances, equipment and time) on the most effective and efficient defense efforts as part of complete compliance with organizational and regulatory policies.

The critical controls can be implemented as an additional layer to already-deployed security measures, since CSC had a number of security measures in place prior to the introduction of these 20 critical security controls. Therefore, it is important to conduct an extensive assessment of the existing security implementations to eliminate issues of duplication or serious omission. Despite the significance of conventional risk management frameworks, the 20 critical security controls act as an initial risk assessment suited to quick, high-value action, which is consistent with standard risk management systems and upholds compliance with other regulatory conventions (CCS, 2014b).

CSC is expected to use the information assurance program based on the aforementioned security controls to facilitate compliance with, and leverage the power of, existing best practices and standards. These are stipulated by the Security Content Automation Protocol (SCAP), the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO) and others (CPNI, 2015). Nevertheless, the security controls under the assurance program cannot be leveraged without proper systems investigation, as seen earlier; therefore, an analysis of existing information systems is done to understand the particular steps and/or infrastructural requirements necessary for successful implementation of CSC’s information assurance program. This forms the foundation for determining which controls are to be implemented, in which order, and how, so as to form an integrated IT infrastructure security system. Among the critical security controls, CSC 1 to CSC 5 act as the anchors of successful information security measures, hence the need to implement them first. A case in point is the prioritization of the first five security controls in the DHS Continuous Diagnostics and Mitigation (CDM) Program (CCS, 2014b).

To effectively prevent attacks, CCS (2014b) proposes the following five key quick wins: application whitelisting (CSC 2); implementation of standard and secure configurations (CSC 3); application software patching every 48 hours (CSC 4); system software patching every 48 hours (CSC 4); and restriction of administrative rights to a small number of users (CSC 3). Accordingly, these have been prioritized in the course of developing CSC’s information assurance program.

5.0 Project goals and objectives

The principal goal of this project was to implement an information assurance program based on the Council on CyberSecurity’s 20 critical security controls to ensure that the enterprise networks and IT systems at CSC are secure from internal and external threats. More precisely, the project seeks to ensure that the information assets residing in the firm’s IT infrastructure are always available to authenticated and authorized users, coupled with correctness and confidentiality. Another goal was to ensure that users of CSC’s IT resources understand the information assurance measures implemented in the company through training and awareness programs. This way, IT users have a deep sense of ownership and the firm has a high potential to achieve its desired security capacity through a corporate-wide effort. Additionally, the information assurance program was aimed at ensuring that CSC’s networks and information systems comply with established privacy and security laws, regulations, directives, and best practices.

To achieve these project goals, the following objectives were pursued:

  1. Creation of a list of the whole set of CSC’s authorized and unauthorized devices.
  2. Creation of a list of all authorized and unauthorized software at CSC.
  3. Creation of a plan to conduct secure configurations on all CSC hardware and software.
  4. Creation of a plan to execute continuous assessment of vulnerability of CSC networks and information systems, alongside potential remediation approaches.
  5. Creation of a plan to implement effective malware defense.
  6. Creation of a plan to secure CSC hardware and software applications.
  7. Creation of a plan to implement secure wireless access control.
  8. Creation of a plan to implement CSC data recovery capability.
  9. Creation of a plan to assess security skills and conduct the required training.

6.0 Project timeline

Project Deliverable or Milestone                                                          Duration
An inventory of the whole set of CSC's authorized and unauthorized devices.               1
An inventory of all authorized and unauthorized software at CSC.                          1
Documented secure configurations for all CSC hardware and software.                       2
Continuous vulnerability assessment results for CSC networks and information systems,
alongside potential remediation approaches.                                               1
Effective malware security implementations.                                               2
Secured application software.                                                             2
Secured wireless access control.                                                          2
CSC data recovery system.                                                                 2
Security skills gaps and effective training program.                                      1

7.0 Project development

The implementation of an information assurance program at CSC follows the five fundamental principles set out by the CCS (2014b): offense informs defense, prioritization, metrics, continuous diagnostics and mitigation, and automation. The ultimate goal, however, is to defend: to identify, analyze and mitigate threats to enterprise networks and IT systems. Vulnerability assessment is a key procedure with regard to defense implementation. The value of intelligence collection can be leveraged through effective cyber threat and exploitation analysis to help build successful defense measures.

7.1 Project execution

7.1.1 Inventory of Authorized and Unauthorized Devices

CSC 1 requires active management, involving inventorying and tracking of the entire collection of hardware devices residing in the company’s network. This ensures that the enterprise network gives access only to authorized devices, while unauthorized ones are discovered and denied access (CCS, 2014b).

This is critical for a number of reasons. First, cybercriminals in any part of the world are consistently scanning the target network’s IP address space for new and/or insecure devices as they are connected. Attackers also search for portable devices, which are highly likely to be unpatched. As new innovations such as BYOD keep maturing, the possibility of security compromise increases, since the internal network can be more easily attacked. Inventory management of devices also helps in planning and conducting systems and data backup and recovery (CCS, 2014b).

How is this control implemented? An automated device discovery tool is deployed to build an inventory of assets connected to the firm’s network (Bishop, 2003). Dynamic Host Configuration Protocol (DHCP) server logging may also be used to record all network devices. Important information to maintain includes IP addresses, device names, purpose, owner, and associated department for devices such as desktops, laptops, routers, switches, APs, servers, printers, storage devices, telephones, and others. Potential configurations include 802.1x network-level authentication, network access control (NAC), and client certificates for systems validation and verification (CCS, 2014b).
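
As a minimal sketch of how such an audit could be automated, the following Python script cross-checks MAC addresses found in an ISC dhcpd lease file against an approved inventory. The file names, the CSV layout (a "mac" column) and the lease-file location are illustrative assumptions, not CSC specifics:

    # Minimal sketch: flag devices holding DHCP leases that are absent from
    # the authorized inventory. Paths and file formats are assumptions.
    import csv
    import re

    AUTHORIZED = "authorized_devices.csv"   # hypothetical inventory: mac,owner,dept
    LEASES = "/var/lib/dhcp/dhcpd.leases"   # common ISC dhcpd location; adjust as needed

    def load_authorized(path):
        with open(path, newline="") as f:
            return {row["mac"].lower() for row in csv.DictReader(f)}

    def leased_macs(path):
        # ISC dhcpd records each client as "hardware ethernet <mac>;"
        with open(path) as f:
            return {m.lower() for m in
                    re.findall(r"hardware ethernet ([0-9a-fA-F:]{17})", f.read())}

    if __name__ == "__main__":
        for mac in sorted(leased_macs(LEASES) - load_authorized(AUTHORIZED)):
            print("unauthorized device:", mac)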

7.1.2 Inventory of Authorized and Unauthorized Software

CSC 2 requires active management, through inventorying and tracking of all enterprise software, to ensure that only authorized programs are installed and executed, and that the rest are prevented from running on the network (CCS, 2014b).

Attackers constantly scan target firms’ networks searching for vulnerable software to exploit remotely. Attackers can then gain control of compromised systems or launch an organization-wide attack using unauthorized access to a critical system. Zero-day exploits may also be used, especially when attackers are aware that a critical patch is yet to be released. Therefore, complete knowledge of the software installed and running in an organizational network is needed to secure it properly (CCS, 2014b).

Implementing this control requires deployment of application whitelisting tools to allow execution of only those software systems that are on the approved list. Constant scanning should be performed to identify unauthorized software and to ensure that authorized software is properly patched. Authorized software should be identified by type, version number, patch level, and the device names where it is installed. File types such as .zip and .exe should also be closely controlled (CCS, 2014b).
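
A hash-based allow-list is one simple way to approximate application whitelisting. The sketch below, with hypothetical file names and scan directory, flags any executable whose SHA-256 digest is not on the approved list:

    # Minimal sketch: report binaries whose SHA-256 is not on the allow-list.
    # ALLOWLIST and SCAN_DIR are illustrative assumptions.
    import hashlib
    import os

    ALLOWLIST = "approved_hashes.txt"   # hypothetical: one SHA-256 hex digest per line
    SCAN_DIR = "/usr/local/bin"         # directory to audit; adjust per host

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    with open(ALLOWLIST) as f:
        approved = {line.strip() for line in f if line.strip()}

    for name in os.listdir(SCAN_DIR):
        path = os.path.join(SCAN_DIR, name)
        if os.path.isfile(path) and sha256(path) not in approved:
            print("unapproved binary:", path)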

7.1.3 Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers

This entails establishing, implementing, and actively managing the security configuration of desktops, laptops, servers, workstations, and smart devices using a solid configuration management process, so as to prevent cybercriminals from exploiting potential vulnerabilities in services and/or settings (CCS, 2014b).

Manufacturers deliver hardware and software systems in default configurations, normally to uphold simplicity of deployment and usability, so there is little consideration for security in new systems (CCS, 2014b). It is worth noting that weak basic controls, default authentication credentials, open ports and services, vulnerable protocols, and pre-installed unnecessary software create tangible, exploitable loopholes, because attackers are well versed in such default states. Creating configurations with solid security measures requires analyzing a conceivably large number of options to make the best choices. Even where strong initial configurations are made, they must be consistently managed to ensure that patches and updates do not introduce unforeseen security vulnerabilities (Birchall et al., 2004).

The company must ensure that it uses standard, secure systems configurations. Device and software hardening is also crucial for successful implementation of this control, since it removes unnecessary accounts, disables and removes unneeded ports and services, applies patches and updates, and configures non-executable heaps. Other implementation procedures may entail intrusion detection and prevention systems and firewalls; adherence to minimal administrative privileges; strict configuration standards; and remote administration of network hardware and software only over secure channels, for example IPsec and SSL (Tipper et al., 2008).
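
Hardening checks of this kind lend themselves to automation. As a minimal sketch, and assuming an example baseline of three OpenSSH settings (the baseline values here are illustrative, not a CSC standard), the following Python script reports where an sshd_config deviates from the hardened values:

    # Minimal sketch: audit sshd_config against an assumed hardening baseline.
    BASELINE = {
        "permitrootlogin": "no",          # illustrative baseline values
        "passwordauthentication": "no",
        "x11forwarding": "no",
    }

    def audit_sshd(path="/etc/ssh/sshd_config"):
        found = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue                      # skip blanks and comments
                parts = line.split(None, 1)
                if len(parts) == 2:
                    found[parts[0].lower()] = parts[1].lower()
        for key, want in BASELINE.items():
            got = found.get(key, "<unset>")
            if got != want:
                print(f"deviation: {key} = {got} (baseline: {want})")

    if __name__ == "__main__":
        audit_sshd()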

7.1.4 Continuous Vulnerability Assessment and Remediation

This control demands continuously acquiring and evaluating information, and taking proper action on it, to discover vulnerabilities, conduct remediation procedures, and reduce the possibilities for successful attacks (CCS, 2014b).

IT security personnel must work with a constant stream of new information about vulnerabilities and potential remediation procedures: updates and patches, threat news, security advisories, practical remediation guidance, and so on. Understanding and remedying vulnerabilities is a regular activity, demanding considerable resources in time, staffing and money. Attackers have continuous access to the same information as security experts, so they may successfully exploit the gap between the disclosure of a specific vulnerability and its remediation, before vendors and cyber defenders create a defense solution. Failure to scan for potential vulnerabilities may result in late response and/or serious security incidents (CCS, 2014b).

All hardware and software should be regularly auto-scanned for vulnerabilities, with risk scores assigned to prioritize remediation. Automated patch and update management tools should also be installed to keep all systems properly patched and up to date (CPNI, 2015).
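
The risk-scored prioritization can be sketched in a few lines. Assuming a hypothetical scanner export findings.csv with host, vuln_id and cvss columns (the file name, columns and tier cut-offs are illustrative, not from any particular scanner), remediation work could be queued like this:

    # Minimal sketch: sort scanner findings by CVSS score to build a
    # remediation queue. Input format and thresholds are assumptions.
    import csv

    with open("findings.csv", newline="") as f:
        findings = sorted(csv.DictReader(f),
                          key=lambda r: float(r["cvss"]), reverse=True)

    for row in findings:
        score = float(row["cvss"])
        tier = "critical" if score >= 9 else "high" if score >= 7 else "routine"
        print(f'{row["host"]}: {row["vuln_id"]} (CVSS {score}) -> {tier} queue')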

7.1.5 Malware Defenses

The objective of this control is to monitor and control the installation, execution, and propagation of malware at different points in the company’s IT networks and systems. Automation may be leveraged to enable rapid updating of defenses, data capture, and implementation of remediation action (CPNI, 2015).

Malware constitutes a critical element of cyber threats, and may be designed and developed to attack the company’s information systems, data or even the entire network. Malware is even more severe when designed to disable or evade the security measures in place. Attackers are always deploying malicious code, so it is highly necessary to implement proactive defense measures (Tipper et al., 2008). The control is actualized by:

  • Continuously monitoring all hardware and software systems using tools such as anti-virus software, anti-spyware programs, host-based intrusion prevention systems, and firewalls. These tools should combine signature matching with behavior-based detection so that malware without a well-known signature can still be caught.
  • Configuring devices to prevent auto-run of content from external drives, such as USB hard drives, flash disks, and memory cards.
  • Setting configurations for automatically scanning all external drives for malware.
  • Scanning all emails and attachments coming through the company’s gateway to weed out any malicious software.
  • Limiting usage of removable drives and external devices on the company network.
  • Implementing DNS query monitoring to detect and block malicious domains (a minimal sketch follows this list).
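
As a minimal sketch of the DNS-based defense, the following Python script checks queried domains against a blocklist, including parent-domain matches. The log and blocklist file names and their one-entry-per-line formats are assumptions for illustration:

    # Minimal sketch: flag DNS queries whose domain (or any parent domain)
    # appears on a blocklist. File names and formats are assumptions.
    BLOCKLIST = "malicious_domains.txt"   # hypothetical: one domain per line
    QUERY_LOG = "dns_queries.log"         # hypothetical: one queried domain per line

    def is_blocked(domain, blocked):
        parts = domain.lower().rstrip(".").split(".")
        # test "evil.example.com", then "example.com", and so on
        return any(".".join(parts[i:]) in blocked for i in range(len(parts) - 1))

    with open(BLOCKLIST) as f:
        blocked = {line.strip().lower() for line in f if line.strip()}

    with open(QUERY_LOG) as f:
        for line in f:
            domain = line.strip()
            if domain and is_blocked(domain, blocked):
                print("blocked query:", domain)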

7.1.6 Application Software Security

The key objective of this control is to manage the security lifecycle of enterprise software in order to identify, correct, and prevent potential weaknesses (CCS, 2014b).

Frequently, attackers exploit vulnerabilities that exist in application software, including web-based applications. Such vulnerabilities are present for the following key reasons: coding mistakes, incomplete security requirements, logic errors, and poor security testing. SQL injection is a typical attack that can be launched by exploiting such software weaknesses (CPNI, 2015).

According to CCS (2014b), actionable solutions include: ensuring that software in use for business purposes is still vendor-supported and properly patched with security updates; and protecting web-based applications using firewalls that inspect traffic for suspect attacks such as cross-site scripting and SQL injection. In-house developed application software must be scanned for errors and common security vulnerabilities. All software assets should also be properly hardened.
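
The SQL injection weakness mentioned above comes down to how queries are built. The short Python sketch below (using sqlite3 purely for illustration) contrasts vulnerable string concatenation with a parameterized query:

    # Minimal sketch: parameterized queries neutralize SQL injection.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"   # a classic injection payload

    # Vulnerable: concatenation lets the payload rewrite the query logic.
    # conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

    # Safe: the driver binds the value, so the payload is treated as data.
    cur = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
    print(cur.fetchall())   # [] -- no match; the injection attempt fails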

7.1.7 Wireless Access Control

CSC 7 is mainly aimed at creating solid procedures and tools for tracking, controlling, and correcting security issues surrounding wireless local area networks (WLANs), their associated access points (APs), and other wireless systems (CCS, 2014b).

Schou and Shoemaker (2006) argue that serious data thefts have been launched by gaining access to WLANs remotely, defeating enterprise security measures to reach APs residing within an internal network. This way, wireless clients may be easily targeted and successfully attacked. In more serious incidents, exploited devices may be used as backdoors to reconnect to the enterprise network. For example, unauthorized APs may be installed and hidden from internal personnel to consistently steal sensitive data or generate huge traffic aimed at effecting DoS attacks. The fact that WLANs do not operate over physical links makes them a convenient attack vector for cybercriminals to access a target network (CCS, 2014b).

It is important to ensure that all devices connected wirelessly to CSC’s network are matched to an approved security profile for authorized access. In addition, it is important to install and configure proper WLAN vulnerability scanning software to identify wireless APs against a checklist of authorized ones; this way, rogue APs can be easily detected and deactivated. There are also WLAN intrusion detection systems (WIDS) that can be implemented to detect unauthorized wireless clients, attack attempts, successful security breaches, and exploited vulnerabilities. WIDS may also be used to monitor traffic in order to identify potentially malicious code. Devices that are not intended for wireless business purposes should have their wireless access disabled. Wireless traffic should leverage a form of encryption, for example Advanced Encryption Standard (AES) encryption under WiFi Protected Access 2 (WPA2) at minimum. WLANs should also use solid authentication protocols, for example Extensible Authentication Protocol-Transport Layer Security (EAP-TLS), to provide mutual authentication. All peer-to-peer WLAN capabilities should be disabled unless they are intended for a specific business need. Another key security measure is to create separate virtual local area networks (VLANs) to cater for untrusted devices, for example BYOD devices (CCS, 2014b).
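
The rogue-AP check against an authorized checklist can be sketched simply. Assuming a hypothetical scanner output of one "BSSID SSID" pair per line and an authorized list of one BSSID per line (both file formats are assumptions), the comparison reduces to a set lookup:

    # Minimal sketch: flag observed access points whose BSSID is not on the
    # authorized checklist. Input files and formats are assumptions.
    AUTHORIZED_APS = "authorized_aps.txt"   # hypothetical: one BSSID per line
    OBSERVED = "scan_results.txt"           # hypothetical: "BSSID SSID" per line

    with open(AUTHORIZED_APS) as f:
        approved = {line.strip().lower() for line in f if line.strip()}

    with open(OBSERVED) as f:
        for line in f:
            fields = line.split(None, 1)
            if not fields:
                continue
            bssid = fields[0].lower()
            ssid = fields[1].strip() if len(fields) > 1 else "<hidden>"
            if bssid not in approved:
                print(f"rogue AP candidate: {bssid} ({ssid})")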

7.1.8 Data Recovery Capability

This control defines the procedures and tools applied to achieve proper backup and rapid recovery of critical information based on a proven approach. When attackers compromise IT systems, they often make many changes to hardware and software configurations and disrupt business operations. Sometimes attackers alter data residing in compromised systems, damaging business effectiveness with incorrect information. Without an effective data recovery capacity that eliminates any prospect of further attack or lingering attacker presence, CSC may be unable to recover on time and keep its business running (CCS, 2014b).

How should CSC ensure it has an effective data recovery capability? All information systems must be regularly backed up to ensure sensitive information is available should a security incident strike. To help guarantee the capacity to quickly restore an IT system and/or data from backup, the system software, the application software and the data on a device must all be incorporated into the company’s backup plan (CPNI, 2015). These three elements of an organizational IT system should be backed up to different locations over time, so that there is no single point of failure. It is important to test any backed-up software or data to ensure that the media involved is operational and successful restoration is possible. Since backups are crucial for business continuity, they should be protected through encryption and physically, both in storage and during transmission. Cloud services and remotely located backups are good ways of implementing such protection (CCS, 2014b).
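
A minimal sketch of the backup step, with hypothetical source and destination paths, is shown below. It writes a timestamped archive and records its SHA-256 digest so that restores can later be verified; encryption of the archive (for example with GPG) would be layered on top in practice:

    # Minimal sketch: timestamped backup archive plus SHA-256 digest for
    # restore verification. Paths are illustrative assumptions.
    import hashlib
    import os
    import tarfile
    import time

    SOURCE = "/srv/critical-data"    # hypothetical data to protect
    DEST = "/mnt/offsite-backups"    # hypothetical separately located store

    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = os.path.join(DEST, f"backup-{stamp}.tar.gz")

    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=os.path.basename(SOURCE))

    h = hashlib.sha256()
    with open(archive, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    # store the digest beside the archive; recompute and compare after restore tests
    with open(archive + ".sha256", "w") as f:
        f.write(h.hexdigest() + "\n")
    print("backup written:", archive)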

All devices and software in the company should be managed and regulated by an effective control system. Figure 1 presents an example data recoverability framework for examining a company’s capacity to effectively restore systems and data in the event of a security breach. All the entities must work together to address the business goal of systems and data recoverability. As Figure 1 shows, business production systems are backed up regularly to well-managed backup systems, which are securely stored in offline facilities (CCS, 2014b).

Figure 1: Software and data recoverability framework (CCS, 2014b)

7.1.9 Security Skills Assessment and Appropriate Training to Fill Gaps

This control is also very critical for CSC. All the functional roles and responsibilities in the company, prioritizing those that are mission-critical and sensitive to business continuity, should be matched with specific knowledge, abilities, and skills; in this way, the security of the business is upheld. Additionally, the organization must ensure that it creates and executes a streamlined plan for assessing security issues, identifying gaps, and remediating them through proper strategies, planning, and training and awareness efforts (CCS, 2014b; CPNI, 2015).

To a great extent, a firm may tend to consider cybersecurity a fundamentally technical challenge. However, the activities of individuals also have a basic impact on the overall success of a security cause. Individuals perform critical actions at each phase of system development, including design, implementation, use, and maintenance. Cases include: IT operations, where individuals may be unaware of the security ramifications of various IT logs; the activities of IT users, which can easily lead to social engineering and phishing incidents; security personnel, who are challenged by the huge volume of information associated with IT security; IT systems developers, who may fail to resolve vulnerabilities early enough in the development life-cycle; and senior management and project sponsors, who face challenges in quantifying the role played by cybersecurity in overall operational risk and thus see no compelling reason to invest in the proposed decisions (CPNI, 2015).

Unfortunately, attackers are aware of these human issues and exploit them when planning attacks. For example, attackers painstakingly craft phishing messages to look like anticipated, normal traffic to unsuspecting users, taking advantage of any loopholes between technology and security policies. More specifically, attackers exploit policies that are not accompanied by concrete technical enforcement. In addition, attackers may work within the window between scheduled log audits or patching cycles to ensure they successfully execute their security breach mission. Therefore, no cybersecurity technique can address IT systems risk without means to empower people with better cyber defense behaviors and fundamentally increase awareness and readiness (CCS, 2014b).

This control should be properly implemented if CSC wishes to ensure that the technical security tools already in place are effective. The company should conduct a gap assessment to discover the skills system users need as well as the behaviors that are not being followed. The resulting information may be used to create a baseline and roadmap for employee training and awareness, and training must be delivered to fill any identified skill gaps. Senior staff are the best-suited personnel to offer such training, as they have the desired command across the workforce. Alternatively, external trainers may be engaged to provide onsite training, where scenarios will be effective and relevant. Where there are few people to be trained, conference and online platforms should be used. Security training and awareness programs should focus on key intrusion areas, be convenient for trainees, be kept up to date, and be monitored for completion by specific employees (CCS, 2014b; Tipper et al., 2008).

Awareness levels should be validated and improved through regular tests to assess whether users have the relevant skills and adhere to acceptable behavior. For example, do employees still open suspicious emails? It is also important to use these security aptitude tests to identify further mission-critical skills gaps. For such critical security elements, it is important to use hands-on, real-world cases to train and assess mastery. Simulations may also be used where hands-on training would prove expensive or could have significant negative impacts on business production (Tipper et al., 2008).

7.2 Problems encountered

It was not possible to expansively assess the current security implementations at CSC so as to devise an information assurance program specific to the company’s requirements. Fortunately, the Council on CyberSecurity’s 20 critical security controls have been explored thoroughly enough to address the existing security requirements of an organization such as CSC. The Council on CyberSecurity has also extensively documented the 20 security controls for implementation in a defense framework, so they form a platform on which to model the security needs of an organization. In addition, the researcher noted security incidents such as the one involving Edward Snowden, a former NSA and CIA system administrator who disclosed government secrecy and national security issues to the media in 2013 (Cole & Brunker, 2014). NSA and CIA are classic examples of federal agencies that contract CSC to implement IT projects; this solidifies the general assumption that the company may be involved in potentially exploitable IT contracts requiring the security controls documented in this security capstone research project.

7.3 Changes to original plan

The original plan sought to cover all the 20 critical security controls proposed by the Council on CyberSecurity. However, this was not possible due to time constraints, leading to concentration on the first nine controls (CSC 1 to CSC 9). Nevertheless, the security controls that are covered in this project are adequate for implementation of an information security program at CSC.

7.4 Unanticipated requirements or components that needed to be resolved

It was discovered that assessing the actual security implementations on CSC’s enterprise networks, and on the other IT implementation projects it has engaged in, would have required physical assessment. To remedy this, the project has concentrated on CSC 1 to CSC 5, which act as the anchors of successful information security measures. It has further covered the subsequent four security controls to address concerns related to software security, wireless access security, systems and data recoverability and, most importantly, security training.

7.5 Actual and Potential effects of this project

The information assurance program is an essential tool to help CSC protect its networks, information systems, and data against rapidly growing cyber threats, and consequently to maintain smooth business operations while complying with data security and privacy laws and regulations. The program achieves this by upholding high levels of confidentiality, integrity, availability, authentication, authorization, non-repudiation, and accountability. 

Information assurance offers a concrete platform for end-to-end visibility into information creation, processing, transmission, and storage. The implemented security controls, and more specifically continuous vulnerability assessment, provide an organization with the capability to ensure that information and systems residing in an enterprise network are adequately monitored for potential malicious activity and remediated (Birchall et al., 2004). Through this robust information assurance program, end-to-end IT visibility will be realized regardless of the number or types of computing devices in the network, such as desktops, laptops, and servers. It will also ensure that IT systems manage data and information in line with proven security controls.
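
As a non-authoritative illustration of this continuous vulnerability assessment capability, the Python sketch below flags scan findings that remain open past an assumed remediation deadline; the SLA windows, host names, and findings are hypothetical.

    # Illustrative sketch (hypothetical data and SLA): flag open findings
    # from a recurring vulnerability scan that have exceeded their
    # remediation window, in the spirit of continuous vulnerability
    # assessment and remediation (CSC 4).

    from datetime import date, timedelta

    # Maximum days allowed to remediate, keyed by severity (assumed SLA).
    SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

    # Example open findings: (host, cve_id, severity, first_seen).
    findings = [
        ("web01", "CVE-2014-0160", "critical", date(2015, 3, 1)),
        ("db02", "CVE-2014-6271", "high", date(2015, 3, 20)),
    ]

    def overdue(findings, today):
        """Return findings still open past their severity's SLA window."""
        late = []
        for host, cve, severity, first_seen in findings:
            deadline = first_seen + timedelta(days=SLA_DAYS[severity])
            if today > deadline:
                late.append((host, cve, severity, (today - deadline).days))
        return late

    for host, cve, severity, days_late in overdue(findings, date(2015, 4, 15)):
        print(f"{host}: {cve} ({severity}) is {days_late} days past SLA")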

Equally important, unauthorized systems and users are effectively denied access to the company's IT infrastructure. As a result, the information assurance program will hinder cybercriminals from carrying out exploits such as hacking and malware propagation. Most importantly, the company will be positioned to recover rapidly from security attacks, minimizing business disruption (CCS, 2014b). 
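
To make the denial of unauthorized systems concrete, the following minimal Python sketch checks devices seen on the network against an authorized hardware inventory, echoing the inventory controls (CSC 1); the MAC addresses and the quarantine action are hypothetical.

    # Illustrative sketch (hypothetical inventory): admit only devices
    # present in the authorized hardware inventory; quarantine the rest.

    AUTHORIZED_MACS = {
        "00:1a:2b:3c:4d:5e",  # alice-laptop
        "00:1a:2b:3c:4d:5f",  # build-server
    }

    def admission_decision(mac_address):
        """Allow known devices; quarantine anything not in the inventory."""
        if mac_address.lower() in AUTHORIZED_MACS:
            return "allow"
        return "quarantine"  # e.g., move to an isolated VLAN pending review

    for mac in ("00:1A:2B:3C:4D:5E", "de:ad:be:ef:00:01"):
        print(mac, "->", admission_decision(mac))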

7.6 Success and effectiveness of the project

It should be noted that the covered controls are adequate for application across CSC's IT infrastructure, as they address the critical areas an information assurance program must meet: hardware and software inventory to prevent unauthorized access; secure hardware and software configuration, including hardening; continuous vulnerability assessment coupled with appropriate remediation procedures; defense against malware; application software security; wireless (WLAN) access control; systems and data recovery to ensure business operations are not disrupted by a security incident; and security skills analysis with training and awareness to fill any discovered gaps.

8.0 Additional deliverables

Appendix A: Project Timeline


Appendix B: Primary attack types

The following table lists the attack types that were primarily considered when creating the information assurance program, together with the security controls that mitigate them. 

Attack Type | Associated Security Control(s)
Attackers consistently scan for exploitable hardware and software to launch security breaches | 1, 2
Attackers attempt to exploit unpatched or improperly secured application software | 2, 3, 4
Attackers exploit default system configurations | 3
Exploitation of new system vulnerabilities (e.g., zero-day attacks) where continuous vulnerability analysis is lacking | 4, 5
Malicious code propagation | 5
Remotely launched attacks | 7
Exploitation of poorly secured application software | 6
Social engineering and phishing | 9


9.0 References

Birchall, D., Ezingeard, N., McFadzean, E., Howlin, N., & Yoxall, D. (2004). Information Assurance: Strategic Alignment and Competitive Advantage. Grist Ltd.

Bishop, M. (2003). Computer Security: Art and Science. Addison-Wesley Professional.

Blyth, A., & Kovacich, G.L. (2006). Information Assurance: Security in the Information Environment. Springer Science & Business Media.

Centre for Protection of National Infrastructure. (2015). Critical Security Controls guidance. Retrieved from http://www.cpni.gov.uk/advice/cyber/Critical-controls/

Cole, M., & Brunker, M. (2014, May 26). Edward Snowden: A Timeline. NBC News. Retrieved from http://www.nbcnews.com/feature/edward-snowden-interview/edward-snowden-timeline-n114871

Computer Sciences Corporation. (2015). Company Profile. Retrieved from http://www.csc.com/about_us/ds/29505-company_profile

Council on CyberSecurity. (2014a). Critical Security Controls. Retrieved from http://www.counciloncybersecurity.org/critical-controls/

Council on CyberSecurity. (2014b). The Critical Security Controls for Effective Cyber Defense. Retrieved from http://www.cisecurity.org/documents/CSC-MASTER-VER5.1-10.7.2014.pdf

Grant, T. (2004). International Directory of Company Histories. Detroit, MI: St. James Press. Retrieved from Funding Universe.

Schou, C., & Shoemaker, D. (2006). Information Assurance for the Enterprise: A Roadmap to Information Security. McGraw-Hill Education.

Tipper, D., Krishnamurthy, P., & Joshi, J. (2008). Information Assurance. New York: Elsevier Science & Technology.

Willett, K. D. (2008). Information Assurance Architecture. CRC Press.

Zetter, K. (2010, January 14). Google Hack Attack Was Ultra Sophisticated, New Details Show. WIRED. Retrieved from http://www.wired.com/2010/01/operation-aurora/