
How to execute SPEC95 benchmarks in Simplescalar?

Let us see how to run the SPEC95 benchmarks (little endian), namely compress95, anagram, go, cc1, and perl, in the SimpleScalar simulator. Each benchmark has a corresponding input file that specifies the input to the program and a reference output file we can use to verify that the test was successful. We will write the output of each simulation to a file with a .out extension and print the execution trace, with all the run statistics, to a file with a .trace extension.

The little endian version of the benchmarks, along with the input and output files, can be downloaded from the location below. After extracting the archive, place the folder BenchMarks_Little in the simplesim-3.0 folder of your SimpleScalar installation. All the benchmarks are executed from the simplesim-3.0 directory.

BenchMarks_Little.tar.gz

compress95:

The command below executes the compress95 benchmark, printing the output to compress95.out and logging the execution trace to compress95.trace in the Results folder:

./sim-outorder BenchMarks_Little/Programs/compress95.ss < BenchMarks_Little/Input/compress95.in  2> BenchMarks_Little/Results/compress95.trace > BenchMarks_Little/Results/compress95.out

go:

The command below executes the go benchmark, printing the output to go.out and logging the execution trace to go.trace in the Results folder:

./sim-outorder BenchMarks_Little/Programs/go.ss 50 9 BenchMarks_Little/Input/2stone9.in  2> BenchMarks_Little/Results/go.trace > BenchMarks_Little/Results/go.out

anagram:

Before executing the anagram program, place the words file from the Input folder in the simplesim-3.0 directory. The command below executes the anagram benchmark, printing the output to anagram.out and logging the execution trace to anagram.trace in the Results folder:

./sim-outorder BenchMarks_Little/Programs/anagram.ss words < BenchMarks_Little/Input/anagram.in  2> BenchMarks_Little/Results/anagram.trace > BenchMarks_Little/Results/anagram.out

cc1:

The command below executes the cc1 benchmark, printing the output to 1stmt.s in the Programs folder and logging the execution trace to cc1.trace in the Results folder:

./sim-outorder BenchMarks_Little/Programs/cc1.ss -O BenchMarks_Little/Input/1stmt.i 2> BenchMarks_Little/Results/cc1.trace

perl:

Before executing the perl program, place the perl-tests.pl file from the Input folder in the simplesim-3.0 directory. The command below executes the perl benchmark, printing the output to perl.out and logging the execution trace to perl.trace in the Results folder:

./sim-outorder BenchMarks_Little/Programs/perl.ss < perl-tests.pl 2> BenchMarks_Little/Results/perl.trace > BenchMarks_Little/Results/perl.out
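Since each benchmark ships with a reference output, the runs above can be checked mechanically with diff. The sketch below assumes the reference outputs live in a BenchMarks_Little/Output folder (the folder name is an assumption; adjust it to match the extracted archive), and it skips cc1, which writes 1stmt.s instead of a .out file:

```shell
# Compare a simulation output file against its reference copy.
# $1 = produced output, $2 = reference output
verify() {
  if diff -q "$1" "$2" > /dev/null 2>&1; then
    echo "PASS: $1 matches $2"
  else
    echo "FAIL: $1 differs from $2"
  fi
}

# Check every benchmark that writes a .out file into Results.
# Note: BenchMarks_Little/Output is an assumed reference-output folder.
if [ -d BenchMarks_Little/Results ]; then
  for b in compress95 go anagram perl; do
    verify "BenchMarks_Little/Results/$b.out" "BenchMarks_Little/Output/$b.out"
  done
fi
```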

References:

http://www.simplescalar.com/
http://www.igoy.in/simplescalar-installation-made-simple/
http://users.ece.gatech.edu/~hamblen/4100/course/simplescalar/Spec95%20Benchmark%20Command%20Lines.htm
http://www.spec.org/osg/cpu95/

AppFabric Caching (aka Velocity Caching) - Microsoft .NET

Caching is an essential component for speeding up any web application because it minimizes the number of trips to the actual database. AppFabric Caching, also known as Velocity caching, is a distributed caching system that efficiently leverages the caching infrastructure available across a cluster of cache servers to enhance both application performance and scalability. A key to its effectiveness is that the cache cluster does not store duplicate data: whenever a client requests data, the request is first looked up in the unified logical cache, and if the data is available, it may be served from the physical cache memory of any server in the cluster.

[Figure: AppFabric (Velocity) caching architecture overview diagram]

Velocity caching is implemented as a synchronized tier, and the service can be accessed across the network, with each cache server in the cluster running the caching process (DistributedCache.exe). Even though several cache servers contribute to the shared cache memory, data enters and exits through a common gateway in the form of a centralized, shared cache. The distributed cache is handled in code with simple get and put methods that retrieve, insert, update, or delete data items in the unified logical cache as a whole. The programmer does not need to worry about the internal implementation or the physical location of cached data in the cluster.

Notable Aspects:
  • Each server in the cluster stores the session object data in its RAM and does not write it to disk for obvious reasons
  • No new code has to be written by the programmer; all that is needed is a configuration-setting change to activate Velocity caching
  • Distributed caching means multiple web servers can each run an instance of the same application which helps in load balancing
  • Both web server (IIS) and cache server (Appfabric) can be deployed on a single application server running on Windows Server OS
  • Turning on the high-availability feature helps mitigate situations when a server in the cluster goes down by storing a synchronous secondary copy of the data item on a different server
One of the key challenges in implementing Velocity caching is managing parallel access to data that changes rapidly; Velocity offers different concurrency models suitable for the application at hand. Controlling access to sensitive data among the clients is another challenge, addressed by limiting access to such data with encryption or restricted accounts. AppFabric caching attempts to maximize the benefit of the underlying computing infrastructure through a distributed yet shared architecture, providing value for both application developers and end users. Things can only get better from here.

How to transfer data from one table to another table?

In the software engineering profession, new challenges and problems turn up every day in the blink of an eye, and as engineers we are obliged to resolve them as soon as possible. One of the biggest challenges in the IT industry today is handling data that is vast and growing every day. Data is normally stored in databases in the form of tables, and a common task when working with databases is transferring data from one table to another.

Case I: If both tables reside in the same database server

Consider the case where you want to copy the data from Table_1 under Schema_1 into Table_2 under Schema_2. A simple SQL command for this operation would be:

INSERT INTO DB_Name.Schema_2.Table_2
SELECT * FROM DB_Name.Schema_1.Table_1 WHERE [condition]

If you want to insert only certain columns, you can tweak the command slightly:

INSERT INTO DB_Name.Schema_2.Table_2(COL1, COL2, ....COLn)
SELECT COL1, COL2, ....COLn FROM DB_Name.Schema_1.Table_1 WHERE [condition]

Case II: Load a table from a file

Consider the situation where you want to load a table from a delimited file. This can be done using the BULK INSERT command in Transact-SQL; to perform this operation, the file must be accessible from the database server. CSV (comma-separated values) is a common format used to store data from a table or spreadsheet. To load a database table from a .csv file, use the following command:

BULK INSERT Table_Name
FROM 'C:\Program Files\File.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)

The BULK INSERT command provides a slew of options for loading a table from a simple delimited text file or even from an XML file.

Case III: If both tables reside in different database servers

The bcp utility is an excellent command-line tool for transferring table data between two different database servers swiftly across a network. You can use bcp to extract table data by querying the table and storing the result in a delimited file; you can then load the other table from this delimited file with another simple bcp command. You don't even have to log in to the servers to access the files.

bcp DB1.SCHEMA1.TABLE1 out FILEPATH -n -S[server_name\instance_name] -T -e[error_filepath]
bcp DB2.SCHEMA2.TABLE2 in FILEPATH -n -S[server_name\instance_name] -T -e[error_filepath]

  • -n uses the native (database) data types
  • -S precedes the server name
  • -T uses a trusted connection
  • -e precedes the path of the error file that logs the failed rows

The first command extracts the rows from TABLE1 and stores them in the appropriate FILEPATH (e.g. C:\Program Files\File.csv) on the database server. The second command loads TABLE2 with the data present in File.csv. You have to run the first command to create the file, and then run the second command to load the data into the table. Instead of exporting the entire table, you can also supply your own SELECT query within the bcp command. bcp offers a legion of options to transfer data between tables in different modes and formats.
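As a sketch of the query-based variant, bcp's queryout mode exports the result of an arbitrary SELECT instead of a whole table; here -c exports character data and -t, sets a comma as the field terminator, while the database, schema, table, and column names are placeholders:

bcp "SELECT COL1, COL2 FROM DB1.SCHEMA1.TABLE1 WHERE [condition]" queryout FILEPATH -c -t, -S[server_name\instance_name] -T -e[error_filepath]
bcp DB2.SCHEMA2.TABLE2 in FILEPATH -c -t, -S[server_name\instance_name] -T -e[error_filepath]

When loading the file back with the in direction, use the same -c and -t flags so the formats match.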

Alternatively, you can make use of Data Transformation Services (DTS) / SQL Server Integration Services (SSIS) packages to consolidate data from heterogeneous data sources.

Useful Resources:

bcp Utility - http://msdn.microsoft.com/en-us/library/ms162802.aspx
bulk insert - http://msdn.microsoft.com/en-us/library/ms188365.aspx