This result has been formatted using multiple flags files. The "default header section" from each of them appears next.
Note: The GNU Compiler Collection provides a wide array of compiler options, described in detail and readily available at https://gcc.gnu.org/onlinedocs/gcc/Option-Index.html#Option-Index and https://gcc.gnu.org/onlinedocs/gfortran/. This SPEC CPU flags file contains excerpts from and brief summaries of portions of that documentation.
SPEC's modifications are:
Copyright (C) 2006-2017 Standard Performance Evaluation Corporation
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with the Invariant Sections being "Funding Free Software", the Front-Cover Texts being (a) (see below), and with the Back-Cover Texts being (b) (see below). A copy of the license is included in your SPEC CPU kit at $SPEC/Docs/licenses/FDL.v1.3 and on the web at http://www.spec.org/cpu2017/Docs/licenses/FDL.v1.3. A copy of "Funding Free Software" is on your SPEC CPU kit at $SPEC/Docs/licenses/FundingFreeSW and on the web at http://www.spec.org/cpu2017/Docs/licenses/FundingFreeSW.
(a) The FSF's Front-Cover Text is:
A GNU Manual
(b) The FSF's Back-Cover Text is:
You have freedom to copy and modify this GNU Manual, like GNU software. Copies published by the Free Software Foundation raise funds for GNU development.
clang is a C, C++, and Objective-C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
The clang executable is actually a small driver which controls the overall execution of other tools such as the compiler, assembler and linker. Typically you do not need to interact with the driver, but you transparently use it to run the other tools.
Preprocessing: This stage handles tokenization of the input source file, macro expansion, #include expansion and handling of other preprocessor directives. The output of this stage is typically called a .i (for C), .ii (for C++), .mi (for Objective-C), or .mii (for Objective-C++) file.
Parsing and Semantic Analysis: This stage parses the input file, translating preprocessor tokens into a parse tree. Once in the form of a parse tree, it applies semantic analysis to compute types for expressions and to determine whether the code is well formed. This stage is responsible for generating most of the compiler warnings as well as parse errors. The output of this stage is an Abstract Syntax Tree (AST).
Code Generation and Optimization: This stage translates an AST into low-level intermediate code (known as LLVM IR) and ultimately to machine code. This phase is responsible for optimizing the generated code and handling target-specific code generation. The output of this stage is typically called a .s file or assembly file.
Clang also supports the use of an integrated assembler, in which the code generator produces object files directly. This avoids the overhead of generating the .s file and of calling the target assembler.
Assembler: This stage runs the target assembler to translate the output of the compiler into a target object file. The output of this stage is typically called a .o file or object file.
Linker: This stage runs the target linker to merge multiple object files into an executable or dynamic library. The output of this stage is typically called an a.out, .dylib or .so file.
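For illustration, the driver's stage-selection options map directly onto the stages above; the file names here are placeholders:
$ clang -E hello.c -o hello.i        # stop after preprocessing
$ clang -fsyntax-only hello.c        # parse and run semantic analysis only
$ clang -S hello.c -o hello.s        # stop after code generation (assembly output)
$ clang -c hello.c -o hello.o        # stop after assembling (object file)
$ clang hello.o -o hello             # run the linker to produce an executable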
Invokes the GNU Fortran compiler.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
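As a quick sanity check of the LP64 assumption, the compiler's predefined macros can be inspected; this sketch assumes a clang or gcc driver is on the PATH:
$ clang -dM -E -x c /dev/null | grep -E '__LP64__|__SIZEOF_(INT|LONG|POINTER)__'
# On an LP64 system this reports __SIZEOF_INT__ 4, __SIZEOF_LONG__ 8 and __SIZEOF_POINTER__ 8.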
This macro indicates that Fortran functions called from C should have their names lower-cased.
Use big-endian representation for unformatted files. This is important when reading 521.wrf_r, 621.wrf_s, and 628.pop2_s data files that were originally generated in big-endian format.
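As an illustration only, GNU Fortran spells this conversion -fconvert=big-endian (it can also be requested at run time through the GFORTRAN_CONVERT_UNIT environment variable); the source file name below is a placeholder:
$ gfortran -fconvert=big-endian -c read_input.f90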
Let the type "char" be unsigned, like "unsigned char".
Note: this particular portability flag is included for 526.blender_r per the recommendation in its documentation - see http://www.spec.org/cpu2017/Docs/benchmarks/526.blender_r.html.
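A minimal sketch of how such a flag is commonly exercised, assuming a clang or gcc driver and relying on the standard __CHAR_UNSIGNED__ predefine to confirm the effect:
$ clang -funsigned-char -dM -E -x c /dev/null | grep __CHAR_UNSIGNED__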
Some systems need to see alternate definitions for boolean types. This flag enables their use.
Fortran-to-C symbol naming: C symbol names are lower case with one underscore (_symbol).
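To see which convention a given toolchain actually follows, compile a trivial routine and list its symbols; the underscore placement varies between compilers, so treat this only as a check (file name is a placeholder):
$ printf 'subroutine work()\nend subroutine work\n' > work.f90
$ gfortran -c work.f90 && nm work.o    # the listed name shows where the underscore is placed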
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMGold.so and therefore do not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
$ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
$ mkdir build
$ cd build
$ ../binutils/configure --enable-gold --enable-plugins --disable-werror
$ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold enabled binutils tools like ld, nm, ar. It is recommended that you use soft links to back up and replace existing ld, nm, ar with the gold enabled version.
-Wl passes the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
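Putting the pieces together, a minimal LTO build might look like the following; this assumes the LLVMgold plugin has been installed as described above, and the file names are placeholders:
$ clang -O3 -flto -c a.c b.c
$ clang -O3 -flto -fuse-ld=gold -Wl,--allow-multiple-definition a.o b.o -o app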
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Certain loops with breaks may be vectorized by default at -O2 and above. In some extreme situations this may result in unsafe behavior. Use this option to disable vectorization of such loops.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e. "-O"). The default is "-O2".
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations.
On x86 systems, allows use of instructions that require the listed architecture.
This option transforms the layout of arrays of structure types and their fields to improve cache locality. Possible values are 1, 2, and 3; more aggressive analysis and transformations are performed at higher levels, with -fstruct-layout=3 being the most aggressive. Use -fstruct-layout=3 when you know the allocated size of the array of structures fits within 64KB. Use the value 2 when that size exceeds 64KB but does not exceed 4GB. The option is effective only under -flto, as whole-program analysis is required to perform this optimization.
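For example, a build of an array-of-structures kernel whose per-element allocation is known to stay under 64KB might use level 3; note that the flag is honored only when -flto is in effect, and the file names below are placeholders:
$ clang -O3 -flto -fstruct-layout=3 -c particles.c
$ clang -O3 -flto -fuse-ld=gold particles.o -o particles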
Passes the option-name through the compiler frontend to the optimizer.
Sets the limit at which loops will be unrolled. For example, if unroll-threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
The optimization transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
Restricts the optimization and code generation to first-generation AVX instructions.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
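These tuning knobs are typically handed to the optimizer through the pass-through option described above; the threshold values below are illustrative, not recommendations:
$ clang -O3 -mllvm -unroll-threshold=100 -mllvm -inline-threshold=1000 -c hot_loop.c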
Allows links to proceed even if there are multiple definitions of some symbols. This switch may resolve duplicate symbol errors, as noted in the 502.gcc_r benchmark description.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
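jemalloc is usually brought in either by linking against it or by preloading it at run time; the library path below is an assumption and will differ between distributions:
$ clang -O3 app.o -o app -ljemalloc
$ LD_PRELOAD=/usr/lib64/libjemalloc.so.2 ./app    # alternative: preload without relinking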
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Increases optimization levels: the higher the number, the more optimization is done. Higher levels of optimization may require additional compilation time, in the hopes of reducing execution time. At -O, basic optimizations are performed, such as constant merging and elimination of dead code. At -O2, additional optimizations are added, such as common subexpression elimination and strict aliasing. At -O3, even more optimizations are performed, such as function inlining and vectorization. Many more details are available.
Generate code for processors that include the AVX extensions.
Enables support for generating the adcx instruction.
Tells the optimizer to unroll loops whose number of iterations can be determined at compile time or upon entry to the loop.
Load the plugin code in file dragonegg.so, assumed to be a shared object to be dlopen'd by the compiler. In AOCC, DragonEgg is called the "AOCC Fortran Plugin".
Passes the argument list following the flag to the DragonEgg gfortran plugin. Each argument must be enclosed in quotes.
Certain loops with breaks may be vectorized by default at -O2 and above. In some extreme situations this may result in unsafe behavior. Use this option to disable vectorization of such loops. Use the option -fplugin-arg-dragonegg-llvm-option="-disable-vect-cmp" to pass this option to the LLVM backend through DragonEgg.
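A sketch of a DragonEgg invocation combining the plugin options described above; the plugin path and source file name are placeholders:
$ gfortran -O3 -fplugin=/path/to/dragonegg.so -fplugin-arg-dragonegg-llvm-option="-disable-vect-cmp" -c solver.f90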
Instructs the compiler to link with the gfortran libraries.
Instructs the compiler to link with the AMD-supported math library.
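On the link line these typically appear as explicit library requests; the exact library names here are an assumption (gfortran's runtime is libgfortran, and AMD's math library is commonly linked as amdlibm):
$ clang -O3 -flto main.o solver.o -o app -lgfortran -lamdlibm -lm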
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMgold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you back up the existing ld, nm, and ar and replace them with soft links to the gold-enabled versions.
-Wl tells the compiler driver to pass the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Certain loops with breaks may be vectorized by default at -O2 and above. In some extreme situations this may result in unsafe behavior. Use this option to disable vectorization of such loops.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e. "-O"). The default is "-O2".
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations.
On x86 systems, allows use of instructions that require the listed architecture.
This option transforms the layout of arrays of structure types and their fields to improve cache locality. Possible values are 1, 2 and 3; more aggressive analysis and transformations are performed at higher levels, with -fstruct-layout=3 being the most aggressive. Use -fstruct-layout=3 when you know the allocated size of the array of structures fits within 64KB. Use the value 2 when that size exceeds 64KB but does not exceed 4GB. The option is effective only under -flto, as whole-program analysis is required to perform this optimization.
Passes the option-name through the compiler frontend to the optimizer.
Sets the limit at which loops will be unrolled. For example, if unroll-threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
The optimization transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
Restricts the optimization and code generation to first-generation AVX instructions.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
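For illustration, these options might appear together on a compile line like the following; the numeric values and the source file name are placeholders, not values used in this result:
*** $ clang -O3 -flto -fstruct-layout=2 -mllvm -unroll-threshold=100 -mllvm -inline-threshold=1000 -c foo.c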
Increases optimization levels: the higher the number, the more optimization is done. Higher levels of optimization may
require additional compilation time, in the hopes of reducing execution time. At -O, basic optimizations are performed,
such as constant merging and elimination of dead code. At -O2, additional optimizations are added, such as common
subexpression elimination and strict aliasing. At -O3, even more optimizations are performed, such as function inlining and
vectorization.
Many more details are available.
Generate code for processors that include the AVX extensions.
Enables the adcx instruction generation support.
Tells the optimizer to unroll loops whose number of iterations can be determined at compile time or upon entry to the loop.
Allows links to proceed even if there are multiple definitions of some symbols. This switch may resolve duplicate symbol errors, as noted in the 502.gcc_r benchmark description.
Load the plugin code in file dragonegg.so, assumed to be a shared object to be dlopen'd by the compiler. In AOCC, DragonEgg is called the "AOCC Fortran Plugin".
Passes the argument list following the flag to the DragonEgg gfortran plugin. Each argument must be enclosed in quotes.
Certain loops with breaks may be vectorized by default at -O2 and above. In some extreme situations this may result in unsafe behavior. Use this option to disable vectorization of such loops. Use the option -fplugin-arg-dragonegg-llvm-option="-disable-vect-cmp" to pass this option to the LLVM backend through DragonEgg.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with the gfortran libraries.
Instructs the compiler to link with the AMD-supported math library.
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMgold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you back up the existing ld, nm, and ar and replace them with soft links to the gold-enabled versions.
-Wl tells the compiler driver to pass the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Certain loops with breaks may be vectorized by default at -O2 and above. In some extreme situations this may result in unsafe behavior. Use this option to disable vectorization of such loops.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e. "-O"). The default is "-O2".
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations.
On x86 systems, allows use of instructions that require the listed architecture.
This option transforms the layout of arrays of structure types and their fields to improve cache locality. Possible values are 1, 2 and 3; more aggressive analysis and transformations are performed at higher levels, with -fstruct-layout=3 being the most aggressive. Use -fstruct-layout=3 when you know the allocated size of the array of structures fits within 64KB. Use the value 2 when that size exceeds 64KB but does not exceed 4GB. The option is effective only under -flto, as whole-program analysis is required to perform this optimization.
Passes the option-name through the compiler frontend to the optimizer.
Sets the limit at which loops will be unrolled. For example, if unroll-threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
The optimization transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
Restricts the optimization and code generation to first-generation AVX instructions.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Allows links to proceed even if there are multiple definitions of some symbols. This switch may resolve duplicate symbol errors, as noted in the 502.gcc_r benchmark description.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMgold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you back up the existing ld, nm, and ar and replace them with soft links to the gold-enabled versions.
-Wl tells the compiler driver to pass the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Certain loops with breaks may be vectorized by default at -O2 and above. In some extreme situations this may result in unsafe behavior. Use this option to disable vectorization of such loops.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e. "-O"). The default is "-O2".
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations.
On x86 systems, allows use of instructions that require the listed architecture.
This option transforms the layout of arrays of structure types and their fields to improve cache locality. Possible values are 1, 2 and 3; more aggressive analysis and transformations are performed at higher levels, with -fstruct-layout=3 being the most aggressive. Use -fstruct-layout=3 when you know the allocated size of the array of structures fits within 64KB. Use the value 2 when that size exceeds 64KB but does not exceed 4GB. The option is effective only under -flto, as whole-program analysis is required to perform this optimization.
Passes the option-name through the compiler frontend to the optimizer.
Sets the limit at which loops will be unrolled. For example, if unroll-threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
The optimization transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
Restricts the optimization and code generation to first-generation AVX instructions.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Increases optimization levels: the higher the number, the more optimization is done. Higher levels of optimization may
require additional compilation time, in the hopes of reducing execution time. At -O, basic optimizations are performed,
such as constant merging and elimination of dead code. At -O2, additional optimizations are added, such as common
subexpression elimination and strict aliasing. At -O3, even more optimizations are performed, such as function inlining and
vectorization.
Many more details are available.
Generate code for processors that include the AVX extensions.
Enables the adcx instruction generation support.
Tells the optimizer to unroll loops whose number of iterations can be determined at compile time or upon entry to the loop.
Allows links to proceed even if there are multiple definitions of some symbols. This switch may resolve duplicate symbol errors, as noted in the 502.gcc_r benchmark description.
Load the plugin code in file dragonegg.so, assumed to be a shared object to be dlopen'd by the compiler. In AOCC, DragonEgg is called the "AOCC Fortran Plugin".
Passes the argument list following the flag to the DragonEgg gfortran plugin. Each argument must be enclosed in quotes.
Certain loops with breaks may be vectorized by default at -O2 and above. In some extreme situations this may result in unsafe behavior. Use this option to disable vectorization of such loops. Use the option -fplugin-arg-dragonegg-llvm-option="-disable-vect-cmp" to pass this option to the LLVM backend through DragonEgg.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMgold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you back up the existing ld, nm, and ar and replace them with soft links to the gold-enabled versions.
-Wl tells the compiler driver to pass the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Enable all optimizations of -O3 plus optimizations that are not valid for standard-compliant programs, such as re-ordering
operations without regard to parentheses.
Many more details are available.
On x86 systems, allows use of instructions that require the listed architecture.
This option transforms the layout of arrays of structure types and their fields to improve cache locality. Possible values are 1, 2 and 3; more aggressive analysis and transformations are performed at higher levels, with -fstruct-layout=3 being the most aggressive. Use -fstruct-layout=3 when you know the allocated size of the array of structures fits within 64KB. Use the value 2 when that size exceeds 64KB but does not exceed 4GB. The option is effective only under -flto, as whole-program analysis is required to perform this optimization.
Passes the option-name through the compiler frontend to the optimizer.
This option avoids runtime memory dependency checks to enable aggressive vectorization.
Restricts the optimization and code generation to first-generation AVX instructions.
Sets the limit at which loops will be unrolled. For example, if unroll-threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
The optimization transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMgold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you back up the existing ld, nm, and ar and replace them with soft links to the gold-enabled versions.
-Wl tells the compiler driver to pass the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Enable all optimizations of -O3 plus optimizations that are not valid for standard-compliant programs, such as re-ordering
operations without regard to parentheses.
Many more details are available.
On x86 systems, allows use of instructions that require the listed architecture.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Passes the option-name through the compiler frontend to the optimizer.
Sets the limit at which loops will be unrolled. For example, if unroll-threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
The optimization transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMgold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you back up the existing ld, nm, and ar and replace them with soft links to the gold-enabled versions.
-Wl tells the compiler driver to pass the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Increases optimization levels: the higher the number, the more optimization is done. Higher levels of optimization may
require additional compilation time, in the hopes of reducing execution time. At -O, basic optimizations are performed,
such as constant merging and elimination of dead code. At -O2, additional optimizations are added, such as common
subexpression elimination and strict aliasing. At -O3, even more optimizations are performed, such as function inlining and
vectorization.
Many more details are available.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e. "-O"). The default is "-O2".
Enables AVX2 (Advanced Vector Extensions, 2nd generation) support.
Enables the adcx instruction generation support.
Tells the optimizer to unroll loops whose number of iterations can be determined at compile time or upon entry to the loop.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations.
Load the plugin code in file dragonegg.so, assumed to be a shared object to be dlopen'd by the compiler. In AOCC, DragonEgg is called the "AOCC Fortran Plugin".
Passes the argument list following the flag to the DragonEgg gfortran plugin. Each argument must be enclosed in quotes.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined. Use the option -fplugin-arg-dragonegg-llvm-option="-inline-threshold:1000" to pass this option to the LLVM backend through DragonEgg.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with the gfortran libraries.
Instructs the compiler to link with the AMD-supported math library.
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMgold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you back up the existing ld, nm, and ar and replace them with soft links to the gold-enabled versions.
-Wl tells the compiler driver to pass the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e. "-O"). The default is "-O2".
Generate code for processors that include the AVX extensions.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations.
Increases optimization levels: the higher the number, the more optimization is done. Higher levels of optimization may
require additional compilation time, in the hopes of reducing execution time. At -O, basic optimizations are performed,
such as constant merging and elimination of dead code. At -O2, additional optimizations are added, such as common
subexpression elimination and strict aliasing. At -O3, even more optimizations are performed, such as function inlining and
vectorization.
Many more details are available.
Tells the optimizer to unroll loops whose number of iterations can be determined at compile time or upon entry to the loop.
Load the plugin code in file dragonegg.so, assumed to be a shared object to be dlopen'd by the compiler. In AOCC, DragonEgg is called the "AOCC Fortran Plugin".
Passes the argument list following the flag to the DragonEgg gfortran plugin. Each argument must be enclosed in quotes.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined. Use the option -fplugin-arg-dragonegg-llvm-option="-inline-threshold:1000" to pass this option to the LLVM backend through DragonEgg.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with the gfortran libraries.
Instructs the compiler to link with the AMD-supported math library.
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMgold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you back up the existing ld, nm, and ar and replace them with soft links to the gold-enabled versions.
-Wl tells the compiler driver to pass the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Enable all optimizations of -O3 plus optimizations that are not valid for standard-compliant programs, such as re-ordering
operations without regard to parentheses.
Many more details are available.
On x86 systems, allows use of instructions that require the listed architecture.
This option transforms the layout of arrays of structure types and their fields to improve cache locality. Possible values are 1, 2 and 3; more aggressive analysis and transformations are performed at higher levels, with -fstruct-layout=3 being the most aggressive. Use -fstruct-layout=3 when you know the allocated size of the array of structures fits within 64KB. Use the value 2 when that size exceeds 64KB but does not exceed 4GB. The option is effective only under -flto, as whole-program analysis is required to perform this optimization.
Passes the option-name through the compiler frontend to the optimizer.
This option avoids runtime memory dependency checks to enable aggressive vectorization.
Restricts the optimization and code generation to first-generation AVX instructions.
Sets the limit at which loops will be unrolled. For example, if unroll-threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
The optimization transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
Increases optimization levels: the higher the number, the more optimization is done. Higher levels of optimization may
require additional compilation time, in the hopes of reducing execution time. At -O, basic optimizations are performed,
such as constant merging and elimination of dead code. At -O2, additional optimizations are added, such as common
subexpression elimination and strict aliasing. At -O3, even more optimizations are performed, such as function inlining and
vectorization.
Many more details are available.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e. "-O"). The default is "-O2".
Enables AVX2 (Advanced Vector Extensions, 2nd generation) support.
Enables the adcx instruction generation support.
Tells the optimizer to unroll loops whose number of iterations can be determined at compile time or upon entry to the loop.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations.
Load the plugin code in file dragonegg.so, assumed to be a shared object to be dlopen'd by the compiler. In AOCC, DragonEgg is called the "AOCC Fortran Plugin".
Passes the argument list following the flag to the DragonEgg gfortran plugin. Each argument must be enclosed in quotes.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined. Use the option -fplugin-arg-dragonegg-llvm-option="-inline-threshold:1000" to pass this option to the LLVM backend through DragonEgg.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with the gfortran libraries.
Instructs the compiler to link with the AMD-supported math library.
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMgold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you back up the existing ld, nm, and ar and replace them with soft links to the gold-enabled versions.
-Wl tells the compiler driver to pass the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Enable all optimizations of -O3 plus optimizations that are not valid for standard-compliant programs, such as re-ordering
operations without regard to parentheses.
Many more details are available.
On x86 systems, allows use of instructions that require the listed architecture.
This option transforms the layout of arrays of structure types and their fields to improve cache locality. Possible values are 1, 2 and 3; more aggressive analysis and transformations are performed at higher levels, with -fstruct-layout=3 being the most aggressive. Use -fstruct-layout=3 when you know the allocated size of the array of structures fits within 64KB. Use the value 2 when that size exceeds 64KB but does not exceed 4GB. The option is effective only under -flto, as whole-program analysis is required to perform this optimization.
Passes the option-name through the compiler frontend to the optimizer.
This option avoids runtime memory dependency checks to enable aggressive vectorization.
Restricts the optimization and code generation to first-generation AVX instructions.
Sets the limit at which loops will be unrolled. For example, if unroll-threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
The optimization transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
Note:
-flto requires LLVM to be built with the gold linker. The default binary releases of LLVM from llvm.org do not include LLVMgold.so and so will not support -flto. To use -flto, you will have to:
Download, configure and build binutils for gold with plugin support.
*** $ git clone --depth 1 git://sourceware.org/git/binutils-gdb.git binutils
*** $ mkdir build
*** $ cd build
*** $ ../binutils/configure --enable-gold --enable-plugins --disable-werror
*** $ make all-gold
That should leave you with build/gold/ld-new which supports the -plugin option. Running make will additionally build build/binutils/ar and nm-new binaries supporting plugins.
Build the LLVMgold plugin. Run CMake with -DLLVM_BINUTILS_INCDIR=/path/to/binutils/include. The correct include path will contain the file plugin-api.h.
Replace the existing binutils tools in /usr/bin with the newly built gold-enabled binutils tools such as ld, nm, and ar. It is recommended that you back up the existing ld, nm, and ar and replace them with soft links to the gold-enabled versions.
-Wl tells the compiler driver to pass the following argument through to the linker. In the example, it tells the linker to allow multiple definitions.
-plugin-opt= tells the linker to pass the following argument to the plugin.
The optimization merges duplicate constant uses into a register to reduce instruction width.
Enables loop strength reduction for nested loop structures. By default, the compiler performs loop strength reduction only for the innermost loop.
Enable all optimizations of -O3 plus optimizations that are not valid for standard-compliant programs, such as re-ordering
operations without regard to parentheses.
Many more details are available.
On x86 systems, allows use of instructions that require the listed architecture.
This option transforms the layout of arrays of structure types and their fields to improve cache locality. Possible values are 1, 2 and 3; more aggressive analysis and transformations are performed at higher levels, with -fstruct-layout=3 being the most aggressive. Use -fstruct-layout=3 when you know the allocated size of the array of structures fits within 64KB. Use the value 2 when that size exceeds 64KB but does not exceed 4GB. The option is effective only under -flto, as whole-program analysis is required to perform this optimization.
Passes the option-name through the compiler frontend to the optimizer.
This option avoids runtime memory dependency checks to enable aggressive vectorization.
Restricts the optimization and code generation to first-generation AVX instructions.
Sets the limit at which loops will be unrolled. For example, if unroll-threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
The optimization transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Increases optimization levels: the higher the number, the more optimization is done. Higher levels of optimization may
require additional compilation time, in the hopes of reducing execution time. At -O, basic optimizations are performed,
such as constant merging and elimination of dead code. At -O2, additional optimizations are added, such as common
subexpression elimination and strict aliasing. At -O3, even more optimizations are performed, such as function inlining and
vectorization.
Many more details are available.
Enables AVX2 (Advanced Vector Extensions, 2nd generation) support.
Enables the adcx instruction generation support.
Tells the optimizer to unroll loops whose number of iterations can be determined at compile time or upon entry to the loop.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations.
Load the plugin code in file dragonegg.so, assumed to be a shared object to be dlopen'd by the compiler. In AOCC, DragonEgg is called the "AOCC Fortran Plugin".
Passes the argument list following the flag to the DragonEgg gfortran plugin. Each argument must be enclosed in quotes.
Sets the compiler's inlining threshold level to the value passed as an argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined. Use the option -fplugin-arg-dragonegg-llvm-option="-inline-threshold:1000" to pass this option to the LLVM backend through DragonEgg.
Use the jemalloc library, which is a general purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.
Somewhere between -O0 and -O2.
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective. Level 2 is assumed if no value is specified (i.e. "-O"). The default is "-O2".
This result has been formatted using multiple flags files. The "submit command" from each of them appears next.
SPECrate runs might use one of these methods to bind processes to specific processors, depending on the config file.
Linux systems: the numactl command is commonly used. Here is a brief guide to understanding the specific command which will be found in the config file:
Solaris systems: The pbind command is commonly used, via
submit=echo 'pbind -b...' > dobmk; sh dobmk
The specific command may be found in the config file; here is a brief guide to understanding that command:
pbind -b causes this copy's processes to be bound to the CPU specified by the expression that follows it. See the config file used in the run for the exact syntax, which tends to be cumbersome because of the need to carefully quote parts of the expression. When all expressions are evaluated, the jobs are typically distributed evenly across the system, with each chip running the same number of jobs as all other chips, and each core running the same number of jobs as all other cores.
The pbind expression may include various elements from the SPEC toolset and from standard Unix commands, such as:
Using numactl to bind processes and memory to cores
For multi-copy runs or single-copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move your process from one core to another. This can affect performance. To help, SPEC allows the use of a "submit" command where users can specify a utility to use to bind processes. We have found the utility 'numactl' to be the best choice.
numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies which core(s) to bind the process to. "-l" instructs numactl to keep a process's memory on the local node, while "-m" specifies the node(s) on which to place a process's memory. For full details on using numactl, please refer to your Linux documentation ('man numactl').
Note that with some versions of numactl, particularly the version found on SLES 10, the utility incorrectly interprets application arguments as its own. For example, with the command "numactl --physcpubind=0 -l a.out -m a", numactl will interpret a.out's "-m" option as its own "-m" option. To work around this problem, put the command to be run in a shell script and then run the shell script using numactl. For example: "echo 'a.out -m a' > run.sh ; numactl --physcpubind=0 bash run.sh"
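A hedged example of a SPECrate submit line using numactl; the use of $SPECCOPYNUM to pick a distinct core per copy and of $command as the benchmark-command placeholder follow common config-file practice and are not taken from this result:
submit=numactl --physcpubind=$SPECCOPYNUM -l $command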
No special commands are needed for feedback-directed optimization, other than the compiler profile flags.
This result has been formatted using multiple flags files. The "sw environment" from each of them appears next.
One or more of the following may have been used in the run. If so, it will be listed in the notes sections. Here is a brief guide to understanding them:
LD_LIBRARY_PATH=<directories> (set via config file preENV)
LD_LIBRARY_PATH controls the search order for libraries. Often, it can be defaulted. Sometimes, it is
explicitly set (as documented in the notes in the submission), in order to ensure that the correct versions of
libraries are picked up.
OMP_STACKSIZE=N (set via config file preENV)
Set the stack size for subordinate threads.
ulimit -s N
ulimit -s unlimited
'ulimit' is a Unix command, entered prior to the run. It sets the stack size for the main process, either to N kbytes or to no limit.
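For example (the stack-size values shown are illustrative, not the values used in this run):
ulimit -s unlimited
export OMP_STACKSIZE=128M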
Transparent Huge Pages (THP)
THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. THP is designed to hide much of the complexity of using huge pages from system administrators and developers, as normal huge pages must be assigned at boot time, can be difficult to manage manually, and often require significant changes to code in order to be used effectively. Most recent Linux OS releases have THP enabled by default.
Linux Huge Page settings
If you need finer control and want to set the huge pages manually, you can follow the steps below:
Note that further information about huge pages may be found in your Linux documentation file: /usr/src/linux/Documentation/vm/hugetlbpage.txt
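For example, a fixed pool of huge pages might be reserved and then verified like this; the count of 128 is illustrative only, not a value used in this run:
*** $ sudo sysctl -w vm.nr_hugepages=128
*** $ grep Huge /proc/meminfo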
ulimit -s <n>
Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.
ulimit -l <n>
Sets the maximum size of memory that may be locked into physical memory.
OMP_NUM_THREADS
Sets the maximum number of parallel threads that applications based on OpenMP may use.
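For example (the thread count of 64 is illustrative, not a value used in this run):
export OMP_NUM_THREADS=64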
powersave -f (on SuSE)
Makes the powersave daemon set the CPUs to the highest supported frequency.
/etc/init.d/cpuspeed stop (on Red Hat)
Disables the cpu frequency scaling program in order to set the CPUs to the highest supported frequency.
LD_LIBRARY_PATH
An environment variable set to include the LLVM, JEMalloc and SmartHeap libraries used during compilation of the binaries. This environment variable setting is not needed when building the binaries on the system under test.
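For example (the directory names are placeholders for wherever these libraries are installed on the system under test):
export LD_LIBRARY_PATH=/opt/llvm/lib:/opt/jemalloc/lib:/opt/smartheap/lib:$LD_LIBRARY_PATH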
kernel/randomize_va_space
This option can be used to select the type of process address space randomization that is used in the system, for architectures that support this feature.
*** 0 - Turn the process address space randomization off. This is the default for architectures that do not support this feature anyways, and kernels that are booted with the "norandmaps" parameter.
*** 1 - Make the addresses of mmap base, stack and VDSO page randomized. This, among other things, implies that shared libraries will be loaded to random addresses. Also for PIE-linked binaries, the location of code start is randomized. This is the default if the CONFIG_COMPAT_BRK option is enabled.
*** 2 - Additionally enable heap randomization. This is the default if CONFIG_COMPAT_BRK is disabled.
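For example, kernel/randomize_va_space can be turned off for a run and restored afterwards like this:
*** $ sudo sysctl -w kernel.randomize_va_space=0   # turn randomization off
*** $ sudo sysctl -w kernel.randomize_va_space=2   # restore the usual default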
MALLOC_CONF
An environment variable set to tune the jemalloc allocation strategy during the execution of the binaries. This environment variable setting is not needed when building the binaries on the system under test.
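For example (the specific jemalloc tuning options shown, which disable purging of dirty and muzzy pages, are assumptions about typical usage, not the values used in this result):
export MALLOC_CONF="dirty_decay_ms:-1,muzzy_decay_ms:-1"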
Model | Nominal TDP | Minimum cTDP | Maximum cTDP** |
---|---|---|---|
EPYC 7601 | 180W | 165W | 200W |
EPYC 7551 | 180W | 165W | 200W |
EPYC 7501 | 155/170W | 135W | 155/170W* |
EPYC 7451 | 180W | 165W | 200W |
EPYC 7401 | 155/170W | 135W | 155/170W* |
EPYC 7351 | 155/170W | 135W | 155/170W* |
EPYC 7301 | 155/170W | 135W | 155/170W* |
EPYC 7281 | 155/170W | 135W | 155/170W* |
EPYC 7251 | 120W | 105W | 120W |
Flag description origin markings:
For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact info@spec.org
Copyright 2017-2019 Standard Performance Evaluation Corporation
Tested with SPEC CPU2017 v1.0.2.
Report generated on 2019-02-21 15:45:06 by SPEC CPU2017 flags formatter v5178.