Cloud Computing Paradigms

Cloud Computing offers three paradigms (IaaS, PaaS and SaaS) and three delivery models. The three paradigms offer differing levels of abstraction, but all are delivered on a utility-based, pay-as-you-go basis. The three delivery models encompass public data centres, private data centres, or a mixture of both (hybrid).

Infrastructure-as-a-Service: At its most basic level Infrastructure-as-a-Service (IaaS) delivers virtualisation and scaling of servers. Virtualised operating system images can be uploaded to a cloud provider who, in turn, provides placement and execution of these images on physical hardware within their data centres. In this model, application software is intrinsically linked to the underlying operating system. Customers are responsible for building their own software and the management of the underlying operating system on which it executes. This includes patch management of the operating system, but customers are not usually responsible for physical hardware, networking or storage. Payment models vary but are typically based on resource usage, such as compute, storage and ingress and egress transactions.

Platform-as-a-Service: Platform-as-a-Service (PaaS) delivers virtualisation and scaling of abstracted software packages above the level of the operating system. Applications are typically uploaded to a cloud platform, although some models require full development on the cloud platform itself. The vendor provisions pre-baked operating system images onto hardware within their data centres. The customer’s software is then automatically installed onto these images. With this model there is a decoupling of software from the underlying operating system through an application development platform. Software is programmed against this platform, and the underlying servers have this platform pre-installed. Customers are responsible for building their own software, but there is no requirement to manage the underlying operating system, nor the physical hardware, networking or storage. As with IaaS, payment models vary but are typically based on resource usage, such as compute, storage and data input and output.

Software-as-a-Service: Software-as-a-Service (SaaS) is perhaps the oldest of the ‘as-a-Service’ terms, and refers to fully managed application software delivered as a service. Customers do not need to upload server images or software packages; instead, they rent access to software that has been created and is maintained by the cloud provider. Rather than paying on a per-server, per-hour basis, customers are often charged per user, per month. The benefit of this paradigm is that customers are responsible for neither the software nor the hardware.


What are the five common features of Cloud Computing?

According to the National Institute of Standards and Technology (NIST), the five common features of Cloud Computing are:
  • On-demand self-service: users can set themselves up without the help of anyone else.
  • Ubiquitous network access: available through standard Internet-enabled devices.
  • Location independent resource pooling: processing and storage demands are balanced across a common infrastructure with no particular resource assigned to any individual user.
  • Rapid elasticity: consumers can increase or decrease capacity at will.
  • Pay per use: consumers are charged fees based on their usage of a combination of computing power, bandwidth use and/or storage.

What is Cloud Computing?

A cloud is a powerful combination of computing, networking, storage, management solutions, and business applications that facilitates a new generation of IT and consumer services. These services are available on demand and are delivered economically without compromising security or functionality.

Cloud computing, often referred to as simply “the cloud,” is the delivery of on-demand computing resources – everything from applications to data centers – over the Internet and on a pay-for-use basis. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). 

 

A cloud can be private or public. A public cloud sells services to anyone on the Internet (currently, Amazon Web Services is the largest public cloud provider). A private cloud is a proprietary network or data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create its private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.

Infrastructure-as-a-Service providers such as Amazon Web Services supply virtual server instances and storage, along with APIs that allow users to start, stop, access and configure their virtual servers and storage. In the enterprise, cloud computing allows a company to pay for only as much capacity as is needed, and to bring more online as soon as it is required. Because this pay-for-what-you-use model resembles the way electricity, fuel and water are consumed, it is sometimes referred to as utility computing.

Platform-as-a-Service in the cloud is defined as a set of software and product development tools hosted on the provider's infrastructure. Developers create applications on the provider's platform over the Internet. PaaS providers may use APIs, website portals or gateway software installed on the customer's computer. Force.com (an outgrowth of Salesforce.com) and Google App Engine are examples of PaaS.

Note: There are no standards for interoperability or data portability in the cloud. Some providers will not allow software created by their customers to be moved off the provider's platform.

In the Software-as-a-Service cloud model, the vendor supplies the hardware infrastructure and the software product, and interacts with the user through a front-end portal. SaaS is a very broad market. Services can be anything from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from anywhere.

At its core, cloud computing means accessing the resources and services needed to perform functions with dynamically changing needs. An application or service developer requests access from the cloud rather than from a specific endpoint or named resource. The cloud itself manages multiple infrastructures across multiple organizations, and consists of one or more frameworks overlaid on top of those infrastructures, tying them together. Frameworks provide mechanisms for:

  • self-healing
  • self-monitoring
  • resource registration and discovery
  • service level agreement definitions
  • automatic reconfiguration

The cloud is a virtualization of resources that maintains and manages itself. The Assimilator project, for example, is a framework that executes across a heterogeneous environment in a local area network, providing a local cloud environment.

5 Common Causes of Slow Website Performance

1. Unoptimized Images

Sites impacted: 90%
Unoptimized images are images that can be reduced in size without visual impact to your user, also known as “lossless” optimization. Images that are optimized using lossless methods are visually identical to their original images, just stripped of extraneous metadata that helps describe the image (useful to the designer, not needed for the end user).
A few best practices to consider:
  • PNG image files are often needlessly large. This is due to extra data inside the PNG file, such as comments or unused palette entries, as well as the use of an inefficient DEFLATE compressor. PNG images can be optimized using free tools like pngcrush, which reduce the size of the file without changing or reducing the image quality (example invocations for pngcrush and jpegtran appear after this list).
  • JPEG image files can also be needlessly large for similar reasons to PNG. By using free tools such as jpegtran you can convert JPEGs into progressively rendered JPEG files to reduce the size of the file without losing image quality.
  • PNG files are best used for icons and logos, while JPEG is preferable for photos. Because PNG images support transparency while JPEGs do not, the PNG format is commonly overused for images that are better served as JPEGs. By using JPEG instead, you can often realize file size savings as large as 80%. If possible, consider reworking your design to avoid the use of transparency. Alternatively, you can often append a smaller transparent PNG image alongside the larger JPEG image to achieve the same visual effect at substantial file size savings.
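
As a sketch of the two tools mentioned above, typical invocations might look like the following (file names are hypothetical, and both tools offer many more options):

# Losslessly optimize a PNG (strips ancillary chunks, re-runs DEFLATE)
pngcrush -rem alla -reduce input.png output.png

# Rewrite a JPEG as a progressive JPEG without recompressing the image data
jpegtran -copy none -progressive -optimize -outfile output.jpg input.jpg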

2. Content Served Without HTTP Compression

Sites impacted: 72%
Enabling HTTP compression on your webserver can dramatically reduce the size of the downloaded page, significantly improving load time. This is a high impact change, but is not always as easy as it may seem.
For Apache 2.0 and above, load the mod_deflate module to enable compression.
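
As a minimal sketch, an Apache 2.x configuration that enables mod_deflate for common text-based content types might look like this (the LoadModule path varies by distribution, and the type list is only a starting point):

LoadModule deflate_module modules/mod_deflate.so

<IfModule mod_deflate.c>
    # Compress text-based responses before sending them to the client
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript application/json
</IfModule>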

3. Combinable CSS Images

Sites impacted: 69%
Browsers make an individual HTTP request for every background image specified in your CSS files. It is not uncommon for over half of the total HTTP requests on a single web page to be used for loading background CSS images. By combining related images into a small number of CSS sprites, you can significantly reduce the number of HTTP requests made during your page load.
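
A minimal sketch of the technique, assuming two 16x16 icons have been combined side by side into a single hypothetical sprite.png:

/* Both icons share one downloaded image... */
.icon-home,
.icon-search {
    background-image: url(sprite.png);
    width: 16px;
    height: 16px;
}

/* ...and each shifts the shared background to reveal its own region */
.icon-home   { background-position: 0 0; }
.icon-search { background-position: -16px 0; }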

4. Images Without Caching Information

Sites impacted: 65%
HTTP Caching allows the browser to store a copy of an image for a period of time, preventing the browser from reloading the same image on subsequent page loads and thus dramatically increasing performance. To cache your images, update your webserver configuration to provide an Expires header to your image responses from the server. For images that do not change often, you should specify a “far future” Expires header, typically a date 6 months to a year out from the current date.
Note that even with a far future Expires date, you can always change the image later by modifying the referenced filename using versioning, for example MyImage.png becomes MyImage_v2.png.
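
On Apache, for example, this can be done with mod_expires; a minimal sketch for far-future image caching (adjust the content types and lifetime to suit):

LoadModule expires_module modules/mod_expires.so

<IfModule mod_expires.c>
    ExpiresActive On
    # Serve images with a "far future" Expires header
    ExpiresByType image/png  "access plus 6 months"
    ExpiresByType image/jpeg "access plus 6 months"
</IfModule>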

5. Domain Sharding Not Implemented

Sites impacted: 64%
Most browsers typically support 2-4 concurrent downloads of static resources for each hostname. Therefore, if your page is loading many static resources from the same hostname, the browser will bottleneck, downloading the content in a stair-step fashion. By splitting resources across two different domains, you can effectively double browser throughput when downloading the required resources.
Note that it may be administratively difficult to physically move your files to different hostnames, so as a clever “trick” you can utilize DNS CNAME records to map different hostnames to the same origin. For example, static1.example.com and static2.example.com could both map to static.example.com, prompting the browser to load twice as many resources concurrently as before, without requiring you to physically move the files on the server.
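
In a BIND-style zone file for example.com, the two aliases from the example above would look something like this sketch:

; both shard hostnames alias the existing static host
static1  IN  CNAME  static.example.com.
static2  IN  CNAME  static.example.com.
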
Of course too much concurrency has its own drawbacks on performance, so testing should be done to find the optimal balance.

Source: http://zoompf.com/2013/04/top-5-causes

Best Practices – Stored Procedures

The following suggestions may improve the performance of a Stored Procedure:
  • Use the SET NOCOUNT ON statement at the top of a stored procedure to turn off the messages that SQL Server sends back to the client after each T-SQL statement is executed. This applies to all SELECT, INSERT, UPDATE, and DELETE statements. It can greatly improve the overall performance of the database and application by removing this extra overhead from the network (a procedure skeleton illustrating this, together with the transaction and TRY…CATCH suggestions below, follows this list).
  • Use schema names when creating or referencing database objects in the procedure. It will take less processing time for the Database Engine to resolve object names if it does not have to search multiple schemas. It will also prevent permission and access problems caused by a user’s default schema being assigned when objects are created without specifying the schema. To check schema names for all the tables in a database, use the following statement:
    SELECT '[' + SCHEMA_NAME(schema_id) + '].[' + name + ']'
    AS SchemaTable
    FROM sys.tables
  • Avoid wrapping functions around columns specified in the WHERE and JOIN clauses. Doing so makes the predicates non-sargable and prevents the query processor from using indexes on those columns (see the sargable rewrite after this list).
  • Avoid using scalar functions in SELECT statements that return many rows of data. Because the scalar function must be applied to every row, the resulting behavior is like row-based processing and degrades performance.
  • Avoid the use of SELECT *. Instead, specify the required column names. This can prevent some Database Engine errors that stop stored procedure execution.
  • Avoid processing or returning too much data. Narrow the results as early as possible in the procedure code so that any subsequent operations performed by the procedure are done using the smallest data set possible. Send just the essential data to the client application.
  • Use explicit transactions by using BEGIN/COMMIT TRANSACTION and keep transactions as short as possible. Longer transactions mean longer record locking and a greater potential for deadlock.
  • Use the T-SQL TRY…CATCH feature for error handling inside a procedure. TRY…CATCH can encapsulate an entire block of T-SQL statements. This not only creates less performance overhead, it also makes error reporting more accurate with significantly less programming.
  • Use the DEFAULT keyword on all table columns that are referenced by CREATE TABLE or ALTER TABLE T-SQL statements in the body of the stored procedure. This will prevent passing NULL to columns that do not allow null values.
  • Use NULL or NOT NULL for each column in a temporary table. The ANSI_DFLT_ON and ANSI_DFLT_OFF options control the way the Database Engine assigns the NULL or NOT NULL attributes to columns when these attributes are not specified in a CREATE TABLE or ALTER TABLE statement. If a connection executes a procedure with different settings for these options than the connection that created the procedure, the columns of the table created for the second connection can have different nullability and exhibit different behavior. If NULL or NOT NULL is explicitly stated for each column, the temporary tables are created by using the same nullability for all connections that execute the procedure (see the temporary-table sketch after this list).
  • Use modification statements that convert nulls, and include logic that eliminates rows with null values from queries. Be aware that in T-SQL, NULL is not an empty or “nothing” value. It is a placeholder for an unknown value and can cause unexpected behavior, especially when querying for result sets or using aggregate functions.
  • Use the UNION ALL operator instead of the UNION or OR operators, unless there is a specific need for distinct values. The UNION ALL operator requires less processing overhead because duplicates are not filtered out of the result set.
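
To make several of the suggestions above concrete, here is a minimal procedure skeleton combining SET NOCOUNT ON, schema-qualified names, a short explicit transaction, and TRY…CATCH. The dbo.Orders table, its columns and the procedure name are hypothetical:

CREATE PROCEDURE dbo.usp_UpdateOrderStatus
    @OrderId int,
    @Status varchar(20)
AS
BEGIN
    SET NOCOUNT ON; -- suppress per-statement "rows affected" messages

    BEGIN TRY
        BEGIN TRANSACTION; -- keep the transaction as short as possible

        UPDATE dbo.Orders -- schema-qualified object name
        SET vc_status = @Status
        WHERE in_id = @OrderId;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW; -- re-raise the original error (SQL Server 2012 and later)
    END CATCH
END;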
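
The sargable rewrite mentioned above, again using the hypothetical dbo.Orders table:

-- Non-sargable: the function wrapped around the column blocks index seeks
SELECT in_id
FROM dbo.Orders
WHERE YEAR(dt_order_date) = 2013;

-- Sargable rewrite: compare the bare column against a range
SELECT in_id
FROM dbo.Orders
WHERE dt_order_date >= '20130101'
  AND dt_order_date < '20140101';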
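
Finally, the temporary-table suggestion as a sketch; stating NULL or NOT NULL explicitly makes the table behave identically regardless of the executing connection's ANSI default settings:

CREATE TABLE #Results
(
    in_id int NOT NULL, -- explicit, not left to session settings
    vc_name varchar(20) NULL
);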

Guess the output of the SQL queries

--Statement 1
CREATE TABLE FirstTable
(
in_id int PRIMARY KEY,
vc_name varchar(20)
CONSTRAINT UN_vc_name UNIQUE(vc_name)
);

--Statement 2
CREATE TABLE SecondTable
(
in_id int PRIMARY KEY,
vc_person_name varchar(10) REFERENCES FirstTable(vc_name)
);

Question: What will be the output of both the SQL queries?

Answer: The first SQL query (Statement 1) will execute successfully, but the second SQL query (Statement 2) will throw an error.

Explanation: Columns participating in a foreign key relationship must be defined with the same length and scale. If they are not, the Database Engine will throw the following error:

Column 'FirstTable.vc_name' is not the same length or scale as referencing column 'SecondTable.vc_person_name' in foreign key 'FK__SecondTab__vc_pe__54F81446'. Columns participating in a foreign key relationship must be defined with the same length and scale.
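
One way to fix Statement 2 is to declare the referencing column with the same length as the referenced column, as a sketch:

--Statement 2 (fixed)
CREATE TABLE SecondTable
(
in_id int PRIMARY KEY,
vc_person_name varchar(20) REFERENCES FirstTable(vc_name)
);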