Copyright © 2012 Intel Corporation. All Rights Reserved.
Enables optimizations for speed and disables some optimizations that
increase code size and affect speed.
To limit code size, this option:
- Enables global optimization; this includes data-flow analysis,
code motion, strength reduction and test replacement, split-lifetime
analysis, and instruction scheduling.
- Disables intrinsic recognition and intrinsics inlining.
The O1 option may improve performance for applications with very large
code size, many branches, and execution time not dominated by code within loops.
On IA-32 Windows platforms, -O1 sets the following:
/Qunroll0, /Oi-, /Op-, /Oy, /Gy, /Os, /GF (/Qvc7 and above), /Gf (/Qvc6 and below), /Ob2, and /Og
Enables optimizations for speed. This is the generally recommended
optimization level. This option also enables:
- Inlining of intrinsics
- Intra-file interprocedural optimizations, which include:
- inlining
- constant propagation
- forward substitution
- routine attribute propagation
- variable address-taken analysis
- dead static function elimination
- removal of unreferenced variables
- The following capabilities for performance gain:
- constant propagation
- copy propagation
- dead-code elimination
- global register allocation
- global instruction scheduling and control speculation
- loop unrolling
- optimized code selection
- partial redundancy elimination
- strength reduction/induction variable simplification
- variable renaming
- exception handling optimizations
- tail recursions
- peephole optimizations
- structure assignment lowering and optimizations
- dead store elimination
On IA-32 Windows platforms, -O2 sets the following:
/Og, /Oi-, /Os, /Oy, /Ob2, /GF (/Qvc7 and above), /Gf (/Qvc6 and below), /Gs, and /Gy.
Enables O2 optimizations plus more aggressive optimizations,
such as prefetching, scalar replacement, and loop and memory
access transformations. Enables optimizations for maximum speed,
such as:
- Loop unrolling, including instruction scheduling
- Code replication to eliminate branches
- Padding the size of certain power-of-two arrays to allow
more efficient cache use.
On IA-32 and Intel EM64T processors, when O3 is used with options
-ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler
performs more aggressive data dependency analysis than for O2, which
may result in longer compilation times.
The O3 optimizations may not cause higher performance unless loop and
memory access transformations take place. The optimizations may slow
down code in some cases compared to O2 optimizations.
The O3 option is recommended for applications that have loops that heavily
use floating-point calculations and process large data sets. On IA-32
Windows platforms, -O3 sets the following:
/GF (/Qvc7 and above), /Gf (/Qvc6 and below), and /Ob2
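As an illustration, a typical Linux compile that combines O3 with processor-specific code generation might look like the following (the source file name is hypothetical):
  icc -O3 -xT prog.c -o prog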
Tells the compiler the maximum number of times to unroll loops.
This option enables additional interprocedural optimizations for single file compilation. These optimizations are a subset of full intra-file interprocedural optimizations. One of these optimizations enables the compiler to perform inline function expansion for calls to functions defined within the current source file.
-ipo[n]
Multi-file IP optimizations that include:
- inline function expansion
- interprocedural constant propagation
- dead code elimination
- propagation of function characteristics
- passing arguments in registers
- loop-invariant code motion
(n - number of multi-file objects)
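A minimal sketch of the multi-file IPO workflow on Linux, with hypothetical source and object file names, compiles each source with -ipo and then links through the compiler driver so the interprocedural step runs at link time:
  icc -O3 -ipo -c a.c b.c
  icc -O3 -ipo a.o b.o -o app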
This option instructs the compiler to analyze and transform the program so that 64-bit pointers are shrunk to 32-bit pointers, and 64-bit longs (on Linux) are shrunk into 32-bit longs wherever it is legal and safe to do so. In order for this option to be effective the compiler must be able to optimize using the -ipo/-Qipo option and must be able to analyze all library/external calls the program makes.
This option requires that the size of the program executable never exceeds 2^32 bytes and all data values can be represented within 32 bits. If the program can run correctly in a 32-bit system, these requirements are implicitly satisfied. If the program violates these size restrictions, unpredictable behavior might occur.
-scalar-rep enables scalar replacement performed during loop transformation. To use this option, you must also specify O3. -scalar-rep- disables this optimization.
This option tells the compiler to assume no aliasing in the program.
-fp-model <name> enables the floating-point model variation specified by <name>:
[no-]except - enables/disables floating-point exception semantics
fast[=1|2] - enables more aggressive floating-point optimizations
precise - allows value-safe optimizations
source - enables intermediates in source precision
strict - enables -fp-model precise and -fp-model except, disables contractions, and enables pragma stdc fenv_access
double - rounds intermediates in 53-bit (double) precision
extended - rounds intermediates in 64-bit (extended) precision
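For example, a value-safe build that also enables floating-point exception semantics could be requested as follows (the source file name is hypothetical):
  icc -O2 -fp-model precise -fp-model except compute.c -o compute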
The -fast option enhances execution speed across the entire program by including the following options that can improve run-time performance:
-O3 (maximum speed and high-level optimizations)
-ipo (enables interprocedural optimizations across files)
-xT (generate code specialized for Intel(R) Core(TM)2 Duo processors, Intel(R) Core(TM)2 Quad processors and Intel(R) Xeon(R) processors with SSSE3)
-static (statically links libraries at link time)
-no-prec-div (disables -prec-div, which improves the precision of floating-point divides at some cost in speed)
To override one of the options set by -fast, specify that option after the -fast option on the command line. The exception is the -xT (/QxT on Windows) option, which cannot be overridden. The options set by -fast may change from release to release.
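As an illustration of overriding, placing -O2 after -fast keeps the other -fast settings but lowers the optimization level from the -O3 that -fast implies (the file name is hypothetical):
  icc -fast -O2 prog.c -o prog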
Compiler option to statically link in libraries at link time
Link Intel provided libraries statically
Link Intel provided libraries dynamically
Generate instructions for the highest instruction set and processor available on the compilation host machine.
Code is optimized for Intel(R) Core(TM)2 Duo processors, Intel(R) Core(TM)2 Quad processors and Intel(R) Xeon(R) processors with SSSE3. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If this option is used to compile the main program (in Fortran) or the function main() in C/C++, the resulting program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for AVX instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If this option is used to compile the main program (in Fortran) or the function main() in C/C++, the resulting program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for SSE 4.2 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If this option is used to compile the main program (in Fortran) or the function main() in C/C++, the resulting program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for SSE 4.1 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If this option is used to compile the main program (in Fortran) or the function main() in C/C++, the resulting program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for SSSE3 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If this option is used to compile the main program (in Fortran) or the function main() in C/C++, the resulting program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel Pentium M and compatible Intel processors. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If this option is used to compile the main program (in Fortran) or the function main() in C/C++, the resulting program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel Pentium 4 and compatible Intel processors; this is the default for Intel(R) EM64T systems. The resulting code may contain unconditional use of features that are not supported on other processors.
Tells the auto-parallelizer to generate multithreaded code for loops that can be safely executed in parallel. To use this option, you must also specify option O2 or O3. The default number of threads spawned is equal to the number of processors detected on the system where the binary runs; this can be changed by setting the environment variable OMP_NUM_THREADS.
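A minimal auto-parallelization sketch on Linux, assuming a hypothetical source file, would be:
  icc -O3 -parallel prog.c -o prog
  export OMP_NUM_THREADS=4
  ./prog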
The use of -Qparallel to generate auto-parallelized code requires support libraries that are dynamically linked by default. Specifying libguide.lib on the link line statically links in libguide.lib, allowing auto-parallelized binaries to work on systems that do not have the dynamic version of this library installed.
The use of -Qparallel to generate auto-parallelized code requires support libraries that are dynamically linked by default. Specifying libguide40.lib on the link line statically links in libguide40.lib, allowing auto-parallelized binaries to work on systems that do not have the dynamic version of this library installed.
Optimizes for Intel Pentium 4 and compatible processors with Streaming SIMD Extensions 2 (SSE2).
-prec-div improves precision of floating-point divides. It has a slight impact on speed. -no-prec-div disables this option and enables optimizations that give slightly less precise results than full IEEE division.
-prec-sqrt improves precision of floating-point square root. It has a slight impact on speed. -no-prec-sqrt disables this option and enables optimizations that give slightly less precise results than full IEEE square root.
Instruments the program for profiling, for the first phase of two-phase profile-guided optimization. This instrumentation gathers information about a program's execution paths and data values but does not gather information from hardware performance counters. The profile instrumentation also gathers data for optimizations that are unique to profile-feedback optimization.
Instructs the compiler to produce a profile-optimized
executable and merges available dynamic information (.dyn)
files into a pgopti.dpi file. If you perform multiple
executions of the instrumented program, -prof-use merges
the dynamic information files again and overwrites the
previous pgopti.dpi file.
Without any other options, the current directory is
searched for .dyn files
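A sketch of the two-phase workflow on Linux, with hypothetical file and directory names, is:
  icc -prof-gen -prof-dir ./profdata -O2 prog.c -o prog    (phase 1: instrumented build)
  ./prog train_input                                        (run to produce .dyn files)
  icc -prof-use -prof-dir ./profdata -O2 prog.c -o prog    (phase 2: feedback-optimized build)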
Enable SmartHeap and/or other library usage by forcing the linker to ignore multiple definitions if present
Enable SmartHeap library usage by forcing the linker to ignore multiple definitions
Set the stack reserve amount passed to the linker
Enable/disable(DEFAULT) use of ANSI aliasing rules in optimizations; user asserts that the program adheres to these rules.
Enable/disable(DEFAULT) use of ANSI aliasing rules in optimizations; user asserts that the program adheres to these rules.
Enable/disable(DEFAULT) compiler generation of prefetch instructions to prefetch data.
Directs the compiler to inline calloc() calls as malloc()/memset()
Specify malloc configuration parameters. Specifying a non-zero value will cause alternate configuration parameters to be set for how malloc allocates and frees memory
Enable/disable(DEFAULT) calls to fast calloc function
Enables cache/bandwidth optimization for stores under conditionals (within vector loops)
Enable compiler to generate runtime control code for effective automatic parallelization
Select the method that the register allocator uses to partition each routine into regions: routine - one region per routine; block - one region per block; trace - one region per trace; loop - one region per loop; default - the compiler selects the best option.
Select the method that the register allocator uses to partition each routine into regions: routine - one region per routine; block - one region per block; trace - one region per trace; loop - one region per loop; default - the compiler selects the best option.
Enables more aggressive multi-versioning
Enable the compiler to generate multi-threaded code based on the OpenMP* directives
Make all local variables AUTOMATIC. Same as -automatic
Enables more aggressive unrolling heuristics
Specifies whether streaming stores are generated:
always - enables generation of streaming stores under the assumption that the application is memory bound
auto - compiler decides when streaming stores are used (DEFAULT)
never - disables generation of streaming stores
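Assuming the Linux spelling -opt-streaming-stores for this option (the flag name is inferred here, not stated above), a build that forces generation of streaming stores might look like the following, with a hypothetical source file:
  icc -O3 -xT -opt-streaming-stores always stream.c -o stream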
Disables inline expansion of all intrinsic functions.
Disables conformance to the ANSI C and IEEE 754 standards for floating-point arithmetic.
Allows use of EBP as a general-purpose register in optimizations.
This option enables most speed optimizations, but disables some that increase code size for a small speed benefit.
This option enables global optimizations.
Specifies the level of inline function expansion.
Ob0 - Disables inlining of user-defined functions. Note that statement functions are always inlined.
Ob1 - Enables inlining when an inline keyword or an inline attribute is specified. Also enables inlining according to the C++ language.
Ob2 - Enables inlining of any function at the compiler's discretion.
This option tells the compiler to separate functions into COMDATs for the linker.
This option enables read only string-pooling optimization.
This option enables read/write string-pooling optimization.
This option disables stack-checking for routines with 4096 bytes of local variables and compiler temporaries.
Defines the MPICH_IGNORE_CXX_SEEK macro at compile time to avoid the catastrophic error "SEEK_SET is #defined but must not be for the C++ binding of MPI" when compiling a C++ MPI application.
For mixed-language benchmarks, tell the compiler to convert routine names to lowercase for compatibility
For mixed-language benchmarks, tell the compiler to assume that routine names end with an underscore
Tell the compiler to treat source files as C++ regardless of the file extension
Specifies that source files are in free format. Same as -FR. -nofree indicates fixed format.
Specifies that source files are in fixed format. Same as -FI. -nofixed indicates free format.
This option specifies that the main program is not written in Fortran. It is a link-time option that prevents the compiler from linking for_main.o into applications.
For example, if the main program is written in C and calls a Fortran subprogram, specify -nofor-main when compiling the program with the ifort command. If you omit this option, the main program must be a Fortran program.
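A minimal mixed-language sketch (file names are hypothetical), in which a C main program calls a Fortran subroutine, would be:
  icc -c cmain.c
  ifort -nofor-main cmain.o fsub.f90 -o app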
-mcmodel=<size>
use a specific memory model to generate code and store data
small - Restricts code and data to the first 2GB of address space (DEFAULT)
medium - Restricts code to the first 2GB; it places no memory restriction on data
large - Places no memory restriction on code or data
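For example, a program whose data exceeds 2GB could be built with the medium model as follows (the file name is hypothetical; depending on the compiler version, -shared-intel may also be needed when linking with the medium or large model):
  ifort -mcmodel=medium bigdata.f90 -o bigdata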
-std=<val> enables language support for <val>:
c99 - enables C99 support for C programs
c++11 - enables experimental C++11 support for C++ programs
c++0x - same as c++11
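For example (illustrative file names):
  icc -std=c99 prog.c -o prog
  icpc -std=c++0x prog.cpp -o prog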
Invoke the Intel C/C++ compiler for Intel 64 applications
Invoke the Intel C/C++ compiler for 32-bit applications
Invoke the Intel C compiler for IA32 applications.
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Invoke the Intel C++ compiler for IA32 and Intel 64 applications.
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Invoke the Intel Fortran compiler for IA32 and Intel 64 applications.
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Compiler option to set the path for include files. Used in some integer peak benchmarks which were built using the Intel 64-bit C++ compiler.
Compiler option to set the path for library files. Used in some integer peak benchmarks which were built using the Intel 64-bit C++ compiler.
Compiler option to set the path for include files. Used in some peak benchmarks which were built using the Intel 32-bit C++ compiler.
Compiler option to set the path for library files. Used in some integer peak benchmarks which were built using the Intel 32-bit C++ compiler.
Compiler option to set the path for include files. Used in some peak benchmarks which were built using the Intel 32-bit Fortran compiler.
Compiler option to set the path for library files. Used in some integer peak benchmarks which were built using the Intel 32-bit Fortran compiler.
Defines a macro
KMP_AFFINITY
The KMP_AFFINITY environment variable uses the following general syntax:
KMP_AFFINITY=[<modifier>,...]<type>[,<permute>][,<offset>]
For example, to list a machine topology map, specify KMP_AFFINITY=verbose,none to use a modifier of verbose and a type of none.
The following list describes the supported arguments and their defaults.
modifier - Optional. A string consisting of a keyword and specifier. Default: noverbose, respect, granularity=core.
type - Required. A string indicating the thread affinity to use. The logical and physical types are deprecated but supported for backward compatibility. Default: none.
permute - Optional. Positive integer value; not valid with type values of explicit, none, or disabled. Default: 0.
offset - Optional. Positive integer value; not valid with type values of explicit, none, or disabled. Default: 0.
Type is the only required argument.
Does not bind OpenMP threads to particular thread contexts; however, if the operating system supports affinity, the compiler still uses the OpenMP thread affinity interface to determine machine topology. Specify KMP_AFFINITY=verbose,none to list a machine topology map.
Specifying compact assigns the OpenMP thread <n>+1 to a free thread context as close as possible to the thread context where the <n>th OpenMP thread was placed. In the topology map, the nearer a node is to the root, the more significance the node has when sorting the threads.
Specifying disabled completely disables the thread affinity interfaces. This forces the OpenMP run-time library to behave as if the affinity interface was not supported by the operating system. This includes the low-level API interfaces such as kmp_set_affinity and kmp_get_affinity, which have no effect and will return a nonzero error code.
Specifying explicit assigns OpenMP threads to a list of OS proc IDs that have been explicitly specified by using the proclist= modifier, which is required for this affinity type.
Specifying scatter distributes the threads as evenly as possible across the entire system. scatter is the opposite of compact; so the leaves of the node are most significant when sorting through the machine topology map.
Types logical and physical are deprecated and may become unsupported in a future release. Both are supported for backward compatibility.
For logical and physical affinity types, a single trailing integer is interpreted as an offset specifier instead of a permute specifier. In contrast, with compact and scatter types, a single trailing integer is interpreted as a permute specifier.
Specifying logical assigns OpenMP threads to consecutive logical processors, which are also called hardware thread contexts. The type is equivalent to compact, except that the permute specifier is not allowed. Thus, KMP_AFFINITY=logical,n is equivalent to KMP_AFFINITY=compact,0,n (this equivalence holds regardless of whether a granularity=fine modifier is present).
For both compact and scatter, permute and offset are allowed; however, if you specify only one integer, the compiler interprets the value as a permute specifier. Both permute and offset default to 0.
The permute specifier controls which levels are most significant when sorting the machine topology map. A value for permute forces the mappings to make the specified number of most significant levels of the sort the least significant, and it inverts the order of significance. The root node of the tree is not considered a separate level for the sort operations.
The offset specifier indicates the starting position for thread assignment.
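As one concrete illustration (this is only one possible setting, not a recommendation for all workloads), a commonly documented combination that binds threads to consecutive cores with a permute of 1 and an offset of 0 is:
  export KMP_AFFINITY=granularity=fine,compact,1,0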
Modifiers are optional arguments that precede type. If you do not specify a modifier, the noverbose, respect, and granularity=core modifiers are used automatically.
Modifiers are interpreted in order from left to right, and can negate each other. For example, specifying KMP_AFFINITY=verbose,noverbose,scatter is therefore equivalent to setting KMP_AFFINITY=noverbose,scatter, or just KMP_AFFINITY=scatter.
Does not print verbose messages.
Prints messages concerning the supported affinity. The messages include information about the number of packages, number of cores in each package, number of thread contexts for each core, and OpenMP thread bindings to physical thread contexts.
Information about binding OpenMP threads to physical thread contexts is indirectly shown in the form of the mappings between hardware thread contexts and the operating system (OS) processor (proc) IDs. The affinity mask for each OpenMP thread is printed as a set of OS processor IDs.
KMP_LIBRARY
KMP_LIBRARY = [ throughput | turnaround | serial ], Selects the OpenMP run-time library execution mode. The options for the variable value are throughput, turnaround, and serial.
The compiler with OpenMP enables you to run an application under different execution modes that can be specified at run time. The libraries support the serial, turnaround, and throughput modes.
The serial mode forces parallel applications to run on a single processor.
In a dedicated (batch or single user) parallel environment where all processors are exclusively allocated to the program for its entire run, it is most important to effectively utilize all of the processors all of the time. The turnaround mode is designed to keep active all of the processors involved in the parallel computation in order to minimize the execution time of a single job. In this mode, the worker threads actively wait for more parallel work, without yielding to other threads.
Avoid over-allocating system resources. This occurs if either too many threads have been specified, or if too few processors are available at run time. If system resources are over-allocated, this mode will cause poor performance. The throughput mode should be used instead if this occurs.
In a multi-user environment where the load on the parallel machine is not constant or where the job stream is not predictable, it may be better to design and tune for throughput. This minimizes the total time to run multiple jobs simultaneously. In this mode, the worker threads will yield to other threads while waiting for more parallel work.
The throughput mode is designed to make the program aware of its environment (that is, the system load) and to adjust its resource usage to produce efficient execution in a dynamic environment. This mode is the default.
KMP_BLOCKTIME
KMP_BLOCKTIME = value. Sets the time, in milliseconds, that a thread should wait, after completing the execution of a parallel region, before sleeping. Use the optional character suffixes s (seconds), m (minutes), h (hours), or d (days) to specify the units. Specify infinite for an unlimited wait time.
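For example, to keep worker threads actively waiting rather than sleeping between parallel regions (on Linux):
  export KMP_BLOCKTIME=infinite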
KMP_STACKSIZE
KMP_STACKSIZE = value. Sets the number of bytes to allocate for each OpenMP* thread to use as the private stack for the thread. Recommended size is 16m. Use the optional suffixes: b (bytes), k (kilobytes), m (megabytes), g (gigabytes), or t (terabytes) to specify the units. This variable does not affect the native operating system threads created by the user program nor the thread executing the sequential part of an OpenMP* program or parallel programs created using -parallel.
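For example, to use the recommended 16 MB per-thread stack (on Linux):
  export KMP_STACKSIZE=16m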
OMP_NUM_THREADS
Sets the maximum number of threads to use for OpenMP* parallel regions if no other value is specified in the application. This environment variable applies to both -openmp and -parallel. Example syntax on a Linux system with 8 cores: export OMP_NUM_THREADS=8
OMP_DYNAMIC
OMP_DYNAMIC=[ 1 | 0 | true | false] Enables (1 or true) or disables (0 or false) the dynamic adjustment of the number of threads.
OMP_NESTED
OMP_NESTED=[ 1 | 0 | true | false ] Enables (1 or true) or disables (0 or false) nested parallelism.