Copyright © Intel Corporation. All Rights Reserved.
Enables optimizations for speed and disables some optimizations that
increase code size and affect speed.
To limit code size, this option:
- Enables global optimization; this includes data-flow analysis, code motion, strength reduction and test replacement, split-lifetime analysis, and instruction scheduling.
- Disables intrinsic recognition and intrinsics inlining.
The O1 option may improve performance for applications with very large code size, many branches, and execution time not dominated by code within loops.
On IA-32 Windows platforms, -O1 sets the following:
/Qunroll0, /Oi-, /Op-, /Oy, /Gy, /Os, /GF (/Qvc7 and above), /Gf (/Qvc6 and below), /Ob2, and /Og
Enables optimizations for speed. This is the generally recommended
optimization level. This option also enables:
- Inlining of intrinsics
- Intra-file interprocedural optimizations, which include:
- inlining
- constant propagation
- forward substitution
- routine attribute propagation
- variable address-taken analysis
- dead static function elimination
- removal of unreferenced variables
- The following capabilities for performance gain:
- constant propagation
- copy propagation
- dead-code elimination
- global register allocation
- global instruction scheduling and control speculation
- loop unrolling
- optimized code selection
- partial redundancy elimination
- strength reduction/induction variable simplification
- variable renaming
- exception handling optimizations
- tail recursions
- peephole optimizations
- structure assignment lowering and optimizations
- dead store elimination
On IA-32 Windows platforms, -O2 sets the following:
/Og, /Oi-, /Os, /Oy, /Ob2, /GF (/Qvc7 and above), /Gf (/Qvc6 and below), /Gs, and /Gy.
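As a rough, hypothetical illustration of several of the O2 capabilities listed above (constant propagation, copy propagation, dead-code elimination, and dead store elimination), consider the following sketch; the function and variable names are illustrative, and the exact code generated depends on the compiler version and target:

    /* Sketch of what O2-class optimizations can do to this function. */
    int scale(int x)
    {
        const int factor = 4;     /* constant propagation: 4 replaces factor */
        int tmp = factor;         /* copy propagation removes tmp            */
        int unused = x * 100;     /* dead store, eliminated                  */
        return x * tmp;           /* reduces to: return x * 4;               */
    }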
Enables O2 optimizations plus more aggressive optimizations,
such as prefetching, scalar replacement, and loop and memory
access transformations. Enables optimizations for maximum speed,
such as:
- Loop unrolling, including instruction scheduling
- Code replication to eliminate branches
- Padding the size of certain power-of-two arrays to allow
more efficient cache use.
On IA-32 and Intel EM64T processors, when O3 is used with options
-ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler
performs more aggressive data dependency analysis than for O2, which
may result in longer compilation times.
The O3 optimizations may not cause higher performance unless loop and
memory access transformations take place. The optimizations may slow
down code in some cases compared to O2 optimizations.
The O3 option is recommended for applications that have loops that heavily
use floating-point calculations and process large data sets. On IA-32
Windows platforms, -O3 sets the following:
/GF (/Qvc7 and above), /Gf (/Qvc6 and below), and /Ob2
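The kind of code that typically benefits from O3 is a loop nest doing regular floating-point work over large arrays, where the loop and memory access transformations mentioned above can apply. A minimal, hypothetical sketch (names and the compile line are illustrative):

    /* Loop nest of the kind O3 targets: heavy floating-point work over
     * large data sets; a candidate for unrolling, interchange, and
     * prefetching.  Compile with, for example: icc -O3 -c kernel.c    */
    void rank1_update(double *y, const double *a, const double *x, int n)
    {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                y[i*n + j] += a[i] * x[j];   /* a[i] is loop-invariant in j */
    }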
Sets certain aggressive options to improve the speed of your application.
Disables inline expansion of all intrinsic functions.
Disables conformance to the ANSI C and IEEE 754 standards for floating-point arithmetic.
Allows use of EBP as a general-purpose register in optimizations.
This option enables most speed optimizations, but disables some that increase code size for a small speed benefit.
This option enables global optimizations.
Specifies the level of inline function expansion.
Ob0 - Disables inlining of user-defined functions. Note that statement functions are always inlined.
Ob1 - Enables inlining when an inline keyword or an inline attribute is specified. Also enables inlining according to the C++ language.
Ob2 - Enables inlining of any function at the compiler's discretion.
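A minimal sketch of how the Ob levels interact with the inline keyword in C/C++ (function names are illustrative):

    /* With Ob1, only routines marked inline (or C++ member functions
     * defined in the class body) are inlining candidates; with Ob2 the
     * compiler may also inline plain functions such as add() at its
     * discretion; with Ob0 neither call is inlined.                    */
    static inline int square(int x) { return x * x; }  /* Ob1 candidate */
    static int add(int a, int b)    { return a + b; }  /* Ob2 candidate */

    int combine(int a, int b)
    {
        return square(a) + add(a, b);
    }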
This option tells the compiler to separate functions into COMDATs for the linker.
This option enables read only string-pooling optimization.
This option enables read/write string-pooling optimization.
This option disables stack-checking for routines with 4096 or fewer bytes of local variables and compiler temporaries.
Tells the compiler the maximum number of times to unroll loops.
This option enables additional interprocedural optimizations for single file compilation. These optimizations are a subset of full intra-file interprocedural optimizations. One of these optimizations enables the compiler to perform inline function expansion for calls to functions defined within the current source file.
-ipo[n]
Multi-file IP optimizations that include:
- inline function expansion
- interprocedural constant propagation
- dead code elimination
- propagation of function characteristics
- passing arguments in registers
- loop-invariant code motion
(n - number of multi-file objects)
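A hedged sketch of the multi-file situation -ipo addresses: a call whose target lives in another source file, which single-file compilation cannot inline but whole-program IPO can (file and routine names are illustrative):

    /* util.c */
    double scale(double x) { return x * 2.5; }

    /* main.c */
    double scale(double x);            /* defined in util.c */
    double total(const double *v, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += scale(v[i]);          /* with -ipo, this call can be inlined
                                          across the file boundary            */
        return s;
    }
    /* e.g.: icc -O2 -ipo -c util.c main.c && icc -O2 -ipo util.o main.o */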
This option instructs the compiler to analyze and transform the program so that 64-bit pointers are shrunk to 32-bit pointers, and 64-bit longs (on Linux) are shrunk into 32-bit longs wherever it is legal and safe to do so. In order for this option to be effective the compiler must be able to optimize using the -ipo/-Qipo option and must be able to analyze all library/external calls the program makes.
This option requires that the size of the program executable never exceeds 2^32 bytes and all data values can be represented within 32 bits. If the program can run correctly in a 32-bit system, these requirements are implicitly satisfied. If the program violates these size restrictions, unpredictable behavior might occur.
-scalar-rep enables scalar replacement performed during loop transformation. To use this option, you must also specify O3. -scalar-rep- disables this optimization.
This option tells the compiler to assume no aliasing in the program.
-fp-model <name>
Enables the <name> floating-point model variation:
[no-]except - enable/disable floating-point exception semantics
fast[=1|2] - enables more aggressive floating point optimizations
precise - allows value-safe optimizations
source - enables intermediates in source precision
strict - enables -fp-model precise -fp-model except, disables
contractions and enables pragma stdc fenv_access
double - rounds intermediates in 53-bit (double) precision
extended - rounds intermediates in 64-bit (extended) precision
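As a hedged illustration of what "value-safe" means for the precise/strict settings versus fast: floating-point addition is not associative, so a reassociation such as the one sketched below can change results and is only permitted under the faster models (values are illustrative):

    /* (a + b) + c need not equal a + (b + c) in floating point.        */
    double sum3(double a, double b, double c)
    {
        return (a + b) + c;   /* -fp-model fast may reassociate this;
                                 precise/strict keep the written order  */
    }
    /* e.g. with a = 1e16, b = -1e16, c = 1.0, the two orders can give
     * 1.0 versus 0.0 if b + c rounds back to -1e16.                    */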
specify how data items are aligned
keywords: all (same as -align), none (same as -noalign),
[no]commons, [no]dcommons,
[no]qcommons, [no]zcommons,
rec1byte, rec2byte, rec4byte,
rec8byte, rec16byte, rec32byte,
array8byte, array16byte, array32byte,
array64byte, array128byte, array256byte,
[no]records, [no]sequence
The -fast option enhances execution speed across the entire program by including the following options that can improve run-time performance:
-O3 (maximum speed and high-level optimizations)
-ipo (enables interprocedural optimizations across files)
-xT (generate code specialized for Intel(R) Core(TM)2 Duo processors, Intel(R) Core(TM)2 Quad processors and Intel(R) Xeon(R) processors with SSSE3)
-static (statically link in libraries at link time)
-no-prec-div (disables -prec-div, which improves the precision of FP divides at some cost in speed)
To override one of the options set by -fast (or /fast), specify that option after it on the command line. The exception is the -xT (or /QxT) option, which cannot be overridden. The options set by -fast may change from release to release.
Compiler option to statically link in libraries at link time
Link Intel provided libraries statically
Link Intel provided libraries dynamically
Generate instructions for the highest instruction set and processor available on the compilation host machine.
Code is optimized for Intel(R) Core(TM)2 Duo processors, Intel(R) Core(TM)2 Quad processors and Intel(R) Xeon(R) processors with SSSE3. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for AVX instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for AVX instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for AVX2 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for AVX512 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
May generate Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) Foundation instructions, Intel(R) AVX512 Conflict Detection instructions, as well as the instructions enabled with CORE-AVX2. Optimizes for Intel(R) processors that support Intel(R) AVX-512 instructions.
Code is optimized for Intel(R) processors with support for SSE 4.2 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for SSE 4.1 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for SSSE3 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel Pentium M and compatible Intel processors. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel Pentium 4 and compatible Intel processors; this is the default for Intel® EM64T systems. The resulting code may contain unconditional use of features that are not supported on other processors.
Tells the auto-parallelizer to generate multithreaded code for loops that can be safely executed in parallel. To use this option, you must also specify option O2 or O3. The default number of threads spawned is equal to the number of processors detected in the system where the binary is compiled. This can be changed by setting the OMP_NUM_THREADS environment variable.
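A minimal sketch of a loop the auto-parallelizer can handle: the iterations are independent, so the compiler can divide them among threads (names are illustrative):

    /* Candidate for -parallel / -Qparallel: no cross-iteration dependence. */
    void saxpy(float *y, const float *x, float a, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }
    /* e.g.: icc -O2 -parallel saxpy.c ; run with OMP_NUM_THREADS=4 */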
The use of -Qparallel to generate auto-parallelized code requires support libraries that are dynamically linked by default. Specifying libguide.lib on the link line statically links in libguide.lib to allow auto-parallelized binaries to work on systems which do not have the dynamic version of this library installed.
The use of -Qparallel to generate auto-parallelized code requires support libraries that are dynamically linked by default. Specifying libguide40.lib on the link line statically links in libguide40.lib to allow auto-parallelized binaries to work on systems which do not have the dynamic version of this library installed.
Optimizes for Intel Pentium 4 and compatible processors with Streaming SIMD Extensions 2 (SSE2).
Specifies whether streaming stores are generated:
always - enables generation of streaming stores under the assumption that the application is memory bound
auto - compiler decides when streaming stores are used (DEFAULT)
never - disables generation of streaming stores
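A hedged sketch of the memory-bound pattern the always setting is aimed at: a large array written once with no reuse, where non-temporal (streaming) stores avoid displacing useful data from the cache (names are illustrative):

    /* Write-only loop over data much larger than the cache: the stores
     * gain nothing from caching, so streaming stores can help here.    */
    void fill(double *dst, double value, long n)
    {
        for (long i = 0; i < n; i++)
            dst[i] = value;
    }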
Determines whether the compiler assumes that there are no "large" integers being used or being computed inside loops.
-qopt-zmm-usage=keyword Specifies the level of zmm register usage. You can specify one of the following:
low - Tells the compiler that the compiled program is unlikely to benefit from zmm register usage. It specifies that the compiler should avoid using zmm registers unless it can prove the gain from their usage.
high - Tells the compiler to generate zmm code without restrictions
Is the level of software prefetching optimization desired. Possible values are:
0 - Disable software prefetching.
1 to 5 - Enable different levels of software prefetching. If you do not specify a value for n, the default is 2. Use lower values to reduce the amount of prefetching.
Specify malloc configuration parameters. Specifying a non-zero value will cause alternate configuration parameters to be set for how malloc allocates and frees memory.
OpenMP* SIMD compilation is enabled if option O2 or higher is in effect. OpenMP* SIMD compilation is always disabled at optimization levels of O1 or lower. When option O2 or higher is in effect, OpenMP SIMD compilation can only be disabled by specifying option -qno-openmp-simd or /Qopenmp-simd-. It is not disabled by specifying option -qno-openmp or /Qopenmp-.
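A minimal example of the directive this option governs; the pragma below is standard OpenMP SIMD, and per the description above it is honored without linking the OpenMP threading runtime (names are illustrative):

    #include <math.h>

    /* The omp simd directive requests vectorization of this loop.      */
    void inv_sqrt_all(float *out, const float *in, int n)
    {
        #pragma omp simd
        for (int i = 0; i < n; i++)
            out[i] = 1.0f / sqrtf(in[i]);
    }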
-prec-div improves precision of floating-point divides. It has a slight impact on speed. -no-prec-div disables this option and enables optimizations that give slightly less precise results than full IEEE division.
-prec-sqrt improves precision of floating-point square root. It has a slight impact on speed. -no-prec-sqrt disables this option and enables optimizations that give slightly less precise results than full IEEE square root.
Instrument program for profiling for the first phase of two-phase profile guided optimization. This instrumentation gathers information about a program's execution paths and data values but does not gather information from hardware performance counters. The profile instrumentation also gathers data for optimizations which are unique to profile-feedback optimization.
Instructs the compiler to produce a profile-optimized
executable and merges available dynamic information (.dyn)
files into a pgopti.dpi file. If you perform multiple
executions of the instrumented program, -prof-use merges
the dynamic information files again and overwrites the
previous pgopti.dpi file.
Without any other options, the current directory is
searched for .dyn files
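A hedged sketch of the two-phase flow described above, assuming the instrumentation option is -prof-gen as in the classic compilers; file names and the training input are illustrative:

    /* The hot path depends on input data; the profile from the training
     * run tells the compiler which branch dominates.                    */
    int classify(int v)
    {
        if (v < 0)         /* assumed rarely taken in the training run   */
            return -1;
        return 1;          /* hot path                                   */
    }
    /* Typical sequence:
     *   icc -prof-gen app.c -o app     (instrumented build)
     *   ./app train.in                 (writes .dyn profile files)
     *   icc -prof-use app.c -o app     (optimized build; merges .dyn
     *                                   files into pgopti.dpi)          */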
Enable SmartHeap and/or other library usage by forcing the linker to ignore multiple definitions if present
Enable SmartHeap library usage by forcing the linker to ignore multiple definitions
Sets the stack reserve amount passed to the linker.
Enable/disable(DEFAULT) use of ANSI aliasing rules in optimizations; user asserts that the program adheres to these rules.
Enable/disable(DEFAULT) use of ANSI aliasing rules in optimizations; user asserts that the program adheres to these rules.
Enable/disable(DEFAULT) the compiler to generate prefetch instructions to prefetch data.
Directs the compiler to inline calloc() calls as malloc()/memset()
Specify malloc configuration parameters. Specifying a non-zero value will cause alternate configuration parameters to be set for how malloc allocates and frees memory
Enable/disable(DEFAULT) calls to fast calloc function
Enables cache/bandwidth optimization for stores under conditionals (within vector loops)
Enable compiler to generate runtime control code for effective automatic parallelization
Select the method that the register allocator uses to partition each routine into regions:
routine - one region per routine
block - one region per block
trace - one region per trace
loop - one region per loop
default - compiler selects best option
Select the method that the register allocator uses to partition each routine into regions:
routine - one region per routine
block - one region per block
trace - one region per trace
loop - one region per loop
default - compiler selects best option
Enables more aggressive multi-versioning
Enable the compiler to generate multi-threaded code based on the OpenMP* directives
Enable the compiler to generate multi-threaded code based on the OpenMP* directives(New option.)
Enables or disables OpenMP* SIMD compilation. You can use this option if you want to enable or disable the SIMD support with no impact on other OpenMP features. In this case, no OpenMP runtime library is needed to link and the compiler does not need to generate OpenMP runtime initialization code.
Enables recognition of OpenMP* features and tells the parallelizer to generate multi-threaded code based on OpenMP* directives.
Emit OpenMP code only for SIMD-based constructs. Advanced users who prefer to use OpenMP* as it is implemented by the LLVM community can get most of that functionality by using -fopenmp-simd.
Enables recognition of OpenMP* features, such as parallel, simd, and offloading directives. This is an alternate option for compiler option [Q or q]openmp.
Enables OpenMP* offloading compilation for target pragmas. This option only applies to Intel(R) Graphics Technology. Enabled by default with -qopenmp. Use -qno-openmp-offload to disable. Specify kind to select the default device for target pragmas:
host - allow target code to run on the host system while still doing the outlining for offload
mic - specify Intel(R) MIC Architecture
gfx - specify Intel(R) Graphics Technology
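A minimal sketch of a target construct of the kind the offload option applies to; whether the loop runs on the host or on an offload device depends on the kind setting and the available hardware (names are illustrative):

    /* With -qopenmp (and offloading enabled), this region may be
     * outlined and executed on the selected default device.            */
    void scale_on_device(float *x, float a, int n)
    {
        #pragma omp target teams distribute parallel for map(tofrom: x[0:n])
        for (int i = 0; i < n; i++)
            x[i] *= a;
    }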
Make all local variables AUTOMATIC. Same as -automatic
Enables more aggressive unrolling heuristics
Specifies which implementation to use. Possible values are:
new
Enables the new implementation of parallel-loop support. As a result, parallel C++ range-based loops and collapsing complex loop stacks will not result in compilation errors. This is the default.
old
Enables the old implementation of parallel-loop support. This is the same implementation that was supported in 18.0 and earlier releases.
default
This is the same as specifying new.
Indicates to the compiler what action to take. Possible values are:
keep
Tells the compiler to not attempt any vulnerable code detection or fixing. This is equivalent to not specifying the -mconditional-branch option.
pattern-report
Tells the compiler to perform a search of vulnerable code patterns in the compilation and report all occurrences to stderr.
pattern-fix
Tells the compiler to perform a search of vulnerable code patterns in the compilation and generate code to ensure that the identified data accesses are not executed speculatively. It will also report any fixed patterns to stderr.
This setting does not guarantee total mitigation; it only fixes cases where all components of the vulnerability can be seen or determined by the compiler. The pattern detection will be more complete if advanced optimization options are specified or are in effect, such as option O3 and option -ipo (or /Qipo).
all-fix
Tells the compiler to fix all of the vulnerable code so that it is either not executed speculatively, or there is no observable side-channel created from their speculative execution. Since it is a complete mitigation against Spectre variant 1 attacks, this setting will have the most run-time performance cost.
In contrast to the pattern-fix setting, the compiler will not attempt to identify the exact conditional branches that may have led to the mis-speculated execution.
all-fix-lfence
This is the same as specifying setting all-fix.
all-fix-cmov
Tells the compiler to treat any path where speculative execution of a memory load creates vulnerability (if mispredicted). The compiler automatically adds mitigation code along any vulnerable paths found, but it uses a different method than the one used for all-fix (or all-fix-lfence).
This method uses CMOVcc instruction execution, which constrains speculative execution. Thus, it is used for keeping track of the predicate value, which is updated on each conditional branch.
To prevent Spectre v.1 attack, each memory load that is potentially vulnerable is bitwise ORed with the predicate to mask out the loaded value if the code is on a mispredicted path.
This is analogous to the Clang compiler's option to do Speculative Load Hardening.
This setting is only supported on Intel® 64 architecture-based systems.
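For reference, the classic variant 1 shape that the pattern settings look for is a bounds check followed by dependent loads, as in this hedged sketch (array and variable names are illustrative):

    /* If the branch is mispredicted, the two dependent loads can execute
     * speculatively with an out-of-bounds index and leak data through a
     * cache side channel.                                                */
    extern unsigned char table1[256];
    extern unsigned char table2[256 * 64];

    unsigned char victim(unsigned long idx, unsigned long table1_size)
    {
        if (idx < table1_size)                 /* bounds check             */
            return table2[table1[idx] * 64];   /* dependent accesses       */
        return 0;
    }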
Determines whether the compiler optimizes tail recursive calls. This feature is only available for ifort. It enables conversion of tail recursion into loops.
Determines whether calls to routines are optimized by passing arguments in registers instead of on the stack, and indicates the conditions when the optimization will be performed. This option is deprecated and will be removed in a future release. This feature is only available for ifort. It can improve performance for Application Binary Interfaces (ABIs) that require arguments to be passed in memory and compiled without interprocedural optimization (IPO). Note that on Linux* systems, if all is specified, a small overhead may be paid when calling "unseen" routines that have not been compiled with the same option. This is because the call will need to go through a "thunk" to ensure that arguments are placed back on the stack where the callee expects them.
Tells the compiler to generate code for Intel® 64 architecture.
Determines whether the compiler generates fused multiply-add (FMA) instructions if such instructions exist on the target processor. When the [Q]fma option is specified, the compiler may generate FMA instructions for combining multiply and add operations. When the negative form of the [Q]fma option is specified, the compiler must generate separate multiply and add instructions with intermediate rounding. This option has no effect unless setting CORE-AVX2 or higher is specified for option [Q]x, -march (Linux and macOS*), or /arch (Windows).
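A small sketch of the operation the option affects: with FMA generation enabled, the multiply and add below may fuse into a single instruction with one rounding; with the negative form they remain separate operations, each rounded:

    /* a*b + c: one rounding if fused into an FMA, two roundings if the
     * multiply and add are kept separate.                               */
    double muladd(double a, double b, double c)
    {
        return a * b + c;
    }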
amberlake
broadwell
cannonlake
cascadelake
coffeelake
goldmont
goldmont-plus
haswell
icelake-client (or icelake)
icelake-server
ivybridge
kabylake
knl
knm
sandybridge
silvermont
skylake
skylake-avx512
tremont
whiskeylake
core-avx2 - Generates code for processors that support Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® AVX, SSE4.2, SSE4.1, SSE3, SSE2, SSE, and SSSE3 instructions.
core-avx-i - Generates code for processors that support Float-16 conversion instructions and the RDRND instruction, Intel® Advanced Vector Extensions (Intel® AVX), Intel® SSE4.2, SSE4.1, SSE3, SSE2, SSE, and SSSE3 instructions.
corei7-avx - Generates code for processors that support Intel® Advanced Vector Extensions (Intel® AVX), Intel® SSE4.2, SSE4.1, SSE3, SSE2, SSE, and SSSE3 instructions.
corei7 - Generates code for processors that support Intel® SSE4 Efficient Accelerated String and Text Processing instructions. May also generate code for Intel® SSE4 Vectorizing Compiler and Media Accelerator, Intel® SSE3, SSE2, SSE, and SSSE3 instructions.
atom - Generates code for processors that support MOVBE instructions. May also generate code for SSSE3 instructions and Intel® SSE3, SSE2, and SSE instructions.
core2 - Generates code for the Intel® Core™2 processor family.
pentium4m - Generates code for Intel® Pentium® 4 processors with MMX technology.
pentium-m, pentium4, pentium3, pentium - Generates code for Intel® Pentium® processors. Value pentium3 is only available on Linux* systems.
amberlake
broadwell
cannonlake
cascadelake
coffeelake
goldmont
goldmont-plus
haswell
icelake-client (or icelake)
icelake-server
ivybridge
kabylake
knl
knm
sandybridge
silvermont
skylake
skylake-avx512
tremont
whiskeylake
core-avx2 - Generates code for processors that support Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® AVX, SSE4.2, SSE4.1, SSE3, SSE2, SSE, and SSSE3 instructions.
core-avx-i - Generates code for processors that support Float-16 conversion instructions and the RDRND instruction, Intel® Advanced Vector Extensions (Intel® AVX), Intel® SSE4.2, SSE4.1, SSE3, SSE2, SSE, and SSSE3 instructions.
corei7-avx - Generates code for processors that support Intel® Advanced Vector Extensions (Intel® AVX), Intel® SSE4.2, SSE4.1, SSE3, SSE2, SSE, and SSSE3 instructions.
corei7 - Generates code for processors that support Intel® SSE4 Efficient Accelerated String and Text Processing instructions. May also generate code for Intel® SSE4 Vectorizing Compiler and Media Accelerator, Intel® SSE3, SSE2, SSE, and SSSE3 instructions.
atom - Generates code for processors that support MOVBE instructions. May also generate code for SSSE3 instructions and Intel® SSE3, SSE2, and SSE instructions.
core2 - Generates code for the Intel® Core™2 processor family.
pentium4m - Generates code for Intel® Pentium® 4 processors with MMX technology.
pentium-m, pentium4, pentium3, pentium - Generates code for Intel® Pentium® processors. Value pentium3 is only available on Linux* systems.
Enables or disables the optimization for multiple adjacent gather/scatter type vector memory references. This content is specific to C++; it does not apply to DPC++. This option controls the optimization for multiple adjacent gather/scatter type vector memory references. This optimization hint is useful for performance tuning. It tries to generate more optimal software sequences using shuffles. If you specify this option, the compiler will apply the optimization heuristics. If you specify -qno-opt-multiple-gather-scatter-by-shuffles or /Qopt-multiple-gather-scatter-by-shuffles-, the compiler will not apply the optimization.
Allow aggressive, lossy floating-point optimizations.
Enable optimizations based on the strict definition of an enum's value range.
Enable optimizations based on the strict rules for overwriting polymorphic C++ objects.
Enables dead virtual function elimination optimization. Requires -flto=full.
Allow optimizations for floating point arithmetic that assume arguments and results are not NaNs or Infinities.
Creates multiple processes that can be used to compile large numbers of source files at the same time. n is the maximum number of processes that the compiler should create.
compile all procedures for possible recursive execution.
Allow optimizations that ignore the sign of floating point zeros
Enables or disables vectorization. To disable vectorization, specify -no-vec (Linux* and macOS) or /Qvec- (Windows*). To disable interpretation of SIMD directives, specify -no-simd (Linux* and macOS) or /Qsimd- (Windows*). To disable all compiler vectorization, use the "-no-vec -no-simd" (Linux* and macOS) or "/Qvec- /Qsimd-" (Windows*) compiler options. The option -no-vec (and /Qvec-) disables all auto-vectorization, including vectorization of array notation statements. The option -no-simd (and /Qsimd-) disables vectorization of loops that have SIMD directives.
Enables or disables vectorization. To disable vectorization, specify -no-vec (Linux* and macOS) or /Qvec- (Windows*). To disable interpretation of SIMD directives, specify -no-simd (Linux* and macOS) or /Qsimd- (Windows*). To disable all compiler vectorization, use the "-no-vec -no-simd" (Linux* and macOS) or "/Qvec- /Qsimd-" (Windows*) compiler options. The option -no-vec (and /Qvec-) disables all auto-vectorization, including vectorization of array notation statements. The option -no-simd (and /Qsimd-) disables vectorization of loops that have SIMD directives.
Enables a program to be compiled as a SYCL* program rather than as plain C++11 program.
Enables elimination of DPC++ dead kernel arguments
Enables LLVM-related optimizations before SPIR-V* generation.
Determines whether EBP is used as a general-purpose register in optimizations.
Determines whether EBP is used as a general-purpose register in optimizations.
This option controls the level of memory layout transformations performed by the compiler. This option can improve cache reuse and cache locality.
n
Is the level of memory layout transformations. Possible values are:
0
Disables memory layout transformations. This is the same as specifying -qno-opt-mem-layout-trans (Linux* or macOS) or /Qopt-mem-layout-trans- (Windows*).
1
Enables basic memory layout transformations.
2
Enables more memory layout transformations. This is the same as specifying [q or Q]opt-mem-layout-trans with no argument.
3
Enables more memory layout transformations like copy-in/copy-out of structures for a region of code. You should only use this setting if your system has more than 4GB of physical memory per core.
4
Enables more aggressive memory layout transformations. You should only use this setting if your system has more than 4GB of physical memory per core.
Define the MPICH_IGNORE_CXX_SEEK macro at compilation stage to avoid the catastrophic error "SEEK_SET is #defined but must not be for the C++ binding of MPI" when compiling a C++ MPI application.
For mixed-language benchmarks, tell the compiler to convert routine names to lowercase for compatibility
For mixed-language benchmarks, tell the compiler to assume that routine names end with an underscore
Tell the compiler to treat source files as C++ regardless of the file extension
specify source files are in free format. Same as -FR. -nofree indicates fixed format
specify source files are in fixed format. Same as -FI. -nofixed indicates free format
This option specifies that the main program is not written in Fortran. It is a link-time option that prevents the compiler from linking for_main.o into applications.
For example, if the main program is written in C and calls a Fortran subprogram, specify -nofor-main when compiling the program with the ifort command. If you omit this option, the main program must be a Fortran program.
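A hedged sketch of that mixed-language case, assuming the common lowercase-with-trailing-underscore name mangling mentioned earlier; routine and file names are illustrative:

    /* main.c: C main program calling a Fortran subroutine defined
     * elsewhere as:  subroutine compute(x)                              */
    extern void compute_(double *x);   /* trailing underscore assumed     */

    int main(void)
    {
        double x = 1.0;
        compute_(&x);
        return 0;
    }
    /* e.g.: icc -c main.c ; ifort -c compute.f ; ifort -nofor-main main.o compute.o */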
-mcmodel=<size>
use a specific memory model to generate code and store data
small - Restricts code and data to the first 2GB of address space (DEFAULT)
medium - Restricts code to the first 2GB; it places no memory restriction on data
large - Places no memory restriction on code or data
enable language support for the specified standard:
c99 enable C99 support for C programs
c++11 enable C++11 experimental support for C++ programs
c++0x same as c++11
Invoke the Intel C/C++ compiler for Intel 64 applications
Invoke the Intel C/C++ compiler for 32-bit applications
Invoke the Intel C compiler for IA32 applications.
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Invoke the Intel C++ compiler for IA32 and Intel 64 applications.
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Invoke the Intel Fortran Compiler Classic for IA32 and Intel 64 applications.
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Invoke the Intel oneAPI DPC++/C++ compiler and runtime environment.
The Intel® oneAPI DPC++/C++ Compiler can be found in the Intel® oneAPI Base Toolkit, Intel® oneAPI HPC Toolkit, Intel® oneAPI IoT Toolkit, or as a standalone compiler. More information and specifications can be found on the Intel® oneAPI DPC++/C++ Compiler main page.
Invoke the Intel oneAPI DPC++/C++ compiler and runtime environment.
The Intel® oneAPI DPC++/C++ Compiler can be found in the Intel® oneAPI Base Toolkit, Intel® oneAPI HPC Toolkit, Intel® oneAPI IoT Toolkit, or as a standalone compiler. More information and specifications can be found on the Intel® oneAPI DPC++/C++ Compiler main page.
Invoke the Intel Fortran Compiler (Beta), a new compiler based on the Intel Fortran Compiler Classic (ifort) frontend and runtime libraries, using LLVM backend technology.
ifx does not support 32-bit targets.
The Intel® Fortran Compiler (Beta) (ifx) can be found in the Intel® oneAPI Base Toolkit, Intel® oneAPI HPC Toolkit, Intel® oneAPI IoT Toolkit, or as a standalone compiler. For more information, see Introducing the Intel® Fortran Compiler Classic and Intel® Fortran Compiler (Beta).
Compiler option to set the path for include files. Used in some integer peak benchmarks which were built using the Intel 64-bit C++ compiler.
Compiler option to set the path for library files. Used in some integer peak benchmarks which were built using the Intel 64-bit C++ compiler.
Compiler option to set the path for include files. Used in some peak benchmarks which were built using the Intel 32-bit C++ compiler.
Compiler option to set the path for library files. Used in some integer peak benchmarks which were built using the Intel 32-bit C++ compiler.
Compiler option to set the path for include files. Used in some peak benchmarks which were built using the Intel 32-bit Fortran compiler.
Compiler option to set the path for library files. Used in some integer peak benchmarks which were built using the Intel 32-bit Fortran compiler.
Defines a macro
KMP_AFFINITY
The KMP_AFFINITY environment variable uses the following general syntax:
Syntax: KMP_AFFINITY=[<modifier>,...]<type>[,<permute>][,<offset>]
For example, to list a machine topology map, specify KMP_AFFINITY=verbose,none to use a modifier of verbose and a type of none.
The following table describes the supported specific arguments.
Argument | Default | Description
---|---|---
modifier | noverbose,respect,granularity=core | Optional. String consisting of keyword and specifier.
type | none | Required string. Indicates the thread affinity to use. The logical and physical types are deprecated but supported for backward compatibility.
permute | 0 | Optional. Positive integer value. Not valid with type values of explicit, none, or disabled.
offset | 0 | Optional. Positive integer value. Not valid with type values of explicit, none, or disabled.
Type is the only required argument.
Does not bind OpenMP threads to particular thread contexts; however, if the operating system supports affinity, the compiler still uses the OpenMP thread affinity interface to determine machine topology. Specify KMP_AFFINITY=verbose,none to list a machine topology map.
Specifying compact assigns the OpenMP thread <n>+1 to a free thread context as close as possible to the thread context where the <n> OpenMP thread was placed. For example, in a topology map, the nearer a node is to the root, the more significance the node has when sorting the threads.
Specifying disabled completely disables the thread affinity interfaces. This forces the OpenMP run-time library to behave as if the affinity interface was not supported by the operating system. This includes the low-level API interfaces such as kmp_set_affinity and kmp_get_affinity, which have no effect and will return a nonzero error code.
Specifying explicit assigns OpenMP threads to a list of OS proc IDs that have been explicitly specified by using the proclist= modifier, which is required for this affinity type.
Specifying scatter distributes the threads as evenly as possible across the entire system. scatter is the opposite of compact; so the leaves of the node are most significant when sorting through the machine topology map.
Types logical and physical are deprecated and may become unsupported in a future release. Both are supported for backward compatibility.
For logical and physical affinity types, a single trailing integer is interpreted as an offset specifier instead of a permute specifier. In contrast, with compact and scatter types, a single trailing integer is interpreted as a permute specifier.
Specifying logical assigns OpenMP threads to consecutive logical processors, which are also called hardware thread contexts. The type is equivalent to compact, except that the permute specifier is not allowed. Thus, KMP_AFFINITY=logical,n is equivalent to KMP_AFFINITY=compact,0,n (this equivalence is true regardless of the whether or not a granularity=fine modifier is present).
For both compact and scatter, permute and offset are allowed; however, if you specify only one integer, the compiler interprets the value as a permute specifier. Both permute and offset default to 0.
The permute specifier controls which levels are most significant when sorting the machine topology map. A value for permute forces the mappings to make the specified number of most significant levels of the sort the least significant, and it inverts the order of significance. The root node of the tree is not considered a separate level for the sort operations.
The offset specifier indicates the starting position for thread assignment.
Modifiers are optional arguments that precede type. If you do not specify a modifier, the noverbose, respect, and granularity=core modifiers are used automatically.
Modifiers are interpreted in order from left to right, and can negate each other. For example, specifying KMP_AFFINITY=verbose,noverbose,scatter is therefore equivalent to setting KMP_AFFINITY=noverbose,scatter, or just KMP_AFFINITY=scatter.
Does not print verbose messages.
Prints messages concerning the supported affinity. The messages include information about the number of packages, number of cores in each package, number of thread contexts for each core, and OpenMP thread bindings to physical thread contexts.
Information about binding OpenMP threads to physical thread contexts is indirectly shown in the form of the mappings between hardware thread contexts and the operating system (OS) processor (proc) IDs. The affinity mask for each OpenMP thread is printed as a set of OS processor IDs.
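As a hedged illustration, the program below can be run under different KMP_AFFINITY settings to compare bindings; on a hypothetical two-package machine, compact packs the first threads onto the first package while scatter spreads them across packages, and the verbose modifier prints the actual mapping:

    #include <stdio.h>
    #include <omp.h>

    /* Run with, for example, KMP_AFFINITY=verbose,compact and then with
     * KMP_AFFINITY=verbose,scatter and compare the reported bindings.   */
    int main(void)
    {
        #pragma omp parallel
        printf("thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
        return 0;
    }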
KMP_LIBRARY
KMP_LIBRARY = { throughput | turnaround | serial }, Selects the OpenMP run-time library execution mode. The options for the variable value are throughput, turnaround, and serial.
The compiler with OpenMP enables you to run an application under different execution modes that can be specified at run time. The libraries support the serial, turnaround, and throughput modes.
The serial mode forces parallel applications to run on a single processor.
In a dedicated (batch or single user) parallel environment where all processors are exclusively allocated to the program for its entire run, it is most important to effectively utilize all of the processors all of the time. The turnaround mode is designed to keep active all of the processors involved in the parallel computation in order to minimize the execution time of a single job. In this mode, the worker threads actively wait for more parallel work, without yielding to other threads.
Avoid over-allocating system resources. This occurs if either too many threads have been specified, or if too few processors are available at run time. If system resources are over-allocated, this mode will cause poor performance. The throughput mode should be used instead if this occurs.
In a multi-user environment where the load on the parallel machine is not constant or where the job stream is not predictable, it may be better to design and tune for throughput. This minimizes the total time to run multiple jobs simultaneously. In this mode, the worker threads will yield to other threads while waiting for more parallel work.
The throughput mode is designed to make the program aware of its environment (that is, the system load) and to adjust its resource usage to produce efficient execution in a dynamic environment. This mode is the default.
KMP_BLOCKTIME
KMP_BLOCKTIME = value. Sets the time, in milliseconds, that a thread should wait, after completing the execution of a parallel region, before sleeping. Use the optional character suffixes: s (seconds), m (minutes), h (hours), or d (days) to specify the units. Specify infinite for an unlimited wait time.
KMP_STACKSIZE
KMP_STACKSIZE = value. Sets the number of bytes to allocate for each OpenMP* thread to use as the private stack for the thread. Recommended size is 16m. Use the optional suffixes: b (bytes), k (kilobytes), m (megabytes), g (gigabytes), or t (terabytes) to specify the units. This variable does not affect the native operating system threads created by the user program nor the thread executing the sequential part of an OpenMP* program or parallel programs created using -parallel.
OMP_NUM_THREADS
Sets the maximum number of threads to use for OpenMP* parallel regions if no other value is specified in the application. This environment variable applies to both -openmp and -parallel. Example syntax on a Linux system with 8 cores: export OMP_NUM_THREADS=8
OMP_DYNAMIC
OMP_DYNAMIC={ 1 | 0 } Enables (1, true) or disables (0,false) the dynamic adjustment of the number of threads.
OMP_SCHEDULE
OMP_SCHEDULE={type[,chunk size]} Controls the scheduling of the for-loop work-sharing construct. type can be one of static, dynamic, guided, or runtime; chunk size should be a positive integer.
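A minimal sketch of a loop whose scheduling this variable controls: with schedule(runtime) in the source, the choice is deferred to OMP_SCHEDULE when the program starts (names are illustrative):

    /* With OMP_SCHEDULE="dynamic,4", this loop uses dynamic scheduling
     * with a chunk size of 4.                                           */
    void halve(double *a, int n)
    {
        #pragma omp parallel for schedule(runtime)
        for (int i = 0; i < n; i++)
            a[i] *= 0.5;
    }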
OMP_NESTED
OMP_NESTED={ 1 | 0 } Enables creation of new teams in case of nested parallel regions (1,true) or serializes (0,false) all nested parallel regions. Default is 0.