Writing an LLVM Pass — LLVM 3.4 documentation

Writing an LLVM Pass

  • Introduction — What is a pass?
  • Quick Start — Writing hello world
  • Setting up the build environment
  • Basic code required
  • Running a pass with opt
  • Pass classes and requirements
  • The ImmutablePass class
  • The ModulePass class
  • The runOnModule method
  • The CallGraphSCCPass class
  • The doInitialization(CallGraph &) method
  • The runOnSCC method
  • The doFinalization(CallGraph &) method
  • The FunctionPass class
  • The doInitialization(Module &) method
  • The runOnFunction method
  • The doFinalization(Module &) method
  • The LoopPass class
  • The doInitialization(Loop *, LPPassManager &) method
  • The runOnLoop method
  • The doFinalization() method
  • The RegionPass class
  • The doInitialization(Region *, RGPassManager &) method
  • The runOnRegion method
  • The doFinalization() method
  • The BasicBlockPass class
  • The doInitialization(Function &) method
  • The runOnBasicBlock method
  • The doFinalization(Function &) method
  • The MachineFunctionPass class
  • The runOnMachineFunction(MachineFunction &MF) method
  • Pass registration
  • The print method
  • Specifying interactions between passes
  • The getAnalysisUsage method
  • The AnalysisUsage::addRequired<> and AnalysisUsage::addRequiredTransitive<> methods
  • The AnalysisUsage::addPreserved<> method
  • Example implementations of getAnalysisUsage
  • The getAnalysis<> and getAnalysisIfAvailable<> methods
  • Implementing Analysis Groups
  • Analysis Group Concepts
  • Using RegisterAnalysisGroup
  • Pass Statistics
  • What PassManager does
  • The releaseMemory method
  • Registering dynamically loaded passes
  • Using existing registries
  • Creating new registries
  • Using GDB with dynamically loaded passes
  • Setting a breakpoint in your pass
  • Miscellaneous Problems
  • Future extensions planned
  • Multithreaded LLVM
    Introduction — What is a pass?

    The LLVM Pass Framework is an important part of the LLVM system, because LLVM passes are where most of the interesting parts of the compiler exist. Passes perform the transformations and optimizations that make up the compiler, they build the analysis results that are used by these transformations, and they are, above all, a structuring technique for compiler code.

    All LLVM passes are subclasses of the Pass class, which implement functionality by overriding virtual methods inherited from Pass. Depending on how your pass works, you should inherit from the ModulePass, CallGraphSCCPass, FunctionPass, LoopPass, RegionPass, or BasicBlockPass class, which gives the system more information about what your pass does, and how it can be combined with other passes. One of the main features of the LLVM Pass Framework is that it schedules passes to run in an efficient way based on the constraints that your pass meets (which are indicated by which class it derives from).

    We start by showing you how to construct a pass, everything from setting up the code, to compiling, loading, and executing it. After the basics are down, more advanced features are discussed.

    Quick Start — Writing hello world

    Here we describe how to write the "hello world" of passes. The "Hello" pass is designed to simply print out the name of non-external functions that exist in the program being compiled. It does not modify the program at all, it just inspects it. The source code and files for this pass are available in the LLVM source tree in the lib/Transforms/Hello directory.

    Setting up the build environment

    First, configure and build LLVM. This needs to be done directly inside the LLVM source tree rather than in a separate objects directory. Next, you need to create a new directory somewhere in the LLVM source base. For this example, we'll assume that you made lib/Transforms/Hello. Finally, you must set up a build script (Makefile) that will compile the source code for the new pass. To do this, copy the following into Makefile:

    # Makefile for hello pass

    # Path to top level of LLVM hierarchy
    LEVEL = ../../..

    # Name of the library to build
    LIBRARYNAME = Hello

    # Make the shared library become a loadable module so the tools can
    # dlopen/dlsym on the resulting library.
    LOADABLE_MODULE = 1

    # Include the makefile implementation stuff
    include $(LEVEL)/Makefile.common

    This makefile specifies that all of the .cpp files in the current directory are to be compiled and linked together into a shared object $(LEVEL)/Debug+Asserts/lib/Hello.so that can be dynamically loaded by the opt or bugpoint tools via their -load options. If your operating system uses a suffix other than .so (such as Windows or Mac OS X), the appropriate extension will be used.

    If you are using CMake to build LLVM, see Developing LLVM pass out of source.

    Now that we have the build scripts set up, we just need to write the code for the pass itself.

    Basic code required

    Now that we have a way to compile our new pass, we just have to write it. Start out with:

    #include "llvm/Pass.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Support/raw_ostream.h"

    Which are needed because we are writing a Pass, we are operating on Functions, and we will be doing some printing.

    Next we have:

    using namespace llvm;

    ... which is required because the functions from the include files live in the llvm namespace.

    Next we have:

    namespace {

    ... which starts out an anonymous namespace. Anonymous namespaces are to C++ what the "static" keyword is to C (at global scope). It makes the things declared inside of the anonymous namespace visible only to the current file. If you're not familiar with them, consult a decent C++ book for more information.

    Next, we declare our pass itself:

    struct Hello : public FunctionPass {

    This declares a "Hello" class that is a subclass of FunctionPass. The different builtin pass subclasses are described in detail later, but for now, know that FunctionPass operates on a function at a time.

    static char ID;
    Hello() : FunctionPass(ID) {}

    This declares the pass identifier used by LLVM to identify the pass. This allows LLVM to avoid using expensive C++ runtime information.

    virtual bool runOnFunction(Function &F) {
      errs() << "Hello: ";
      errs().write_escaped(F.getName()) << '\n';
      return false;
    }
    }; // end of struct Hello
    } // end of anonymous namespace

    We declare a runOnFunction method, which overrides an abstract virtual method inherited from FunctionPass. This is where we are supposed to do our thing, so we just print out our message with the name of each function.

    char Hello::ID = 0;

    We initialize the pass ID here. LLVM uses the ID's address to identify a pass, so the initialization value is not important.

    static RegisterPass<Hello> X("hello", "Hello World Pass",
                                 false /* Only looks at CFG */,
                                 false /* Analysis Pass */);

    Lastly, we register our class Hello, giving it a command line argument "hello", and a name "Hello World Pass". The last two arguments describe its behavior: if a pass walks CFG without modifying it then the third argument is set to true; if a pass is an analysis pass, for example dominator tree pass, then true is supplied as the fourth argument.

    As a whole, the .cpp file looks like:

    #include "llvm/Pass.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    namespace {
      struct Hello : public FunctionPass {
        static char ID;
        Hello() : FunctionPass(ID) {}

        virtual bool runOnFunction(Function &F) {
          errs() << "Hello: ";
          errs().write_escaped(F.getName()) << '\n';
          return false;
        }
      };
    }

    char Hello::ID = 0;
    static RegisterPass<Hello> X("hello", "Hello World Pass", false, false);

    Now that it's all together, compile the file with a simple "gmake" command in the local directory and you should get a new file "Debug+Asserts/lib/Hello.so" under the top level directory of the LLVM source tree (not in the local directory). Note that everything in this file is contained in an anonymous namespace — this reflects the fact that passes are self contained units that do not need external interfaces (although they can have them) to be useful.

    Running a pass with opt

    Now that you have a brand new shiny shared object file, we can use the opt command to run an LLVM program through your pass. Because you registered your pass with RegisterPass, you will be able to use the opt tool to access it, once loaded.

    To test it, follow the example at the end of the Getting Started with the LLVM System to compile "Hello World" to LLVM. We can now run the bitcode file (hello.bc) for the program through our transformation like this (of course, any bitcode file will work):

    $ opt -load ../../../Debug+Asserts/lib/Hello.so -hello < hello.bc > /dev/null
    Hello: __main
    Hello: puts
    Hello: main

    The -load option specifies that opt should load your pass as a shared object, which makes "-hello" a valid command line argument (which is one reason you need to register your pass). Because the Hello pass does not modify the program in any interesting way, we just throw away the result of opt (sending it to /dev/null).

    To see what happened to the other string you registered, try running opt with the -help option:

    $ opt -load ../../../Debug+Asserts/lib/Hello.so -help
    OVERVIEW: llvm .bc -> .bc modular optimizer

    USAGE: opt [options] <input bitcode>

    OPTIONS:
      Optimizations available:
    ...
        -globalopt                - Global Variable Optimizer
        -globalsmodref-aa         - Simple mod/ref analysis for globals
        -gvn                      - Global Value Numbering
        -hello                    - Hello World Pass
        -indvars                  - Induction Variable Simplification
        -inline                   - Function Integration/Inlining
        -insert-edge-profiling    - Insert instrumentation for edge profiling
    ...

    The pass name gets added as the information string for your pass, giving some documentation to users of opt. Now that you have a working pass, you would go ahead and make it do the cool transformations you want. Once you get it all working and tested, it may become useful to find out how fast your pass is. The PassManager provides a nice command line option (--time-passes) that allows you to get information about the execution time of your pass along with the other passes you queue up. For example:

    $ opt -load ../../../Debug+Asserts/lib/Hello.so -hello -time-passes < hello.bc > /dev/null
    Hello: __main
    Hello: puts
    Hello: main
    ===============================================================================
                          ... Pass execution timing report ...
    ===============================================================================
      Total Execution Time: 0.02 seconds (0.0479059 wall clock)

       ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Pass Name ---
       0.0100 (100.0%)   0.0000 (  0.0%)   0.0100 ( 50.0%)   0.0402 ( 84.0%)  Bitcode Writer
       0.0000 (  0.0%)   0.0100 (100.0%)   0.0100 ( 50.0%)   0.0031 (  6.4%)  Dominator Set Construction
       0.0000 (  0.0%)   0.0000 (  0.0%)   0.0000 (  0.0%)   0.0013 (  2.7%)  Module Verifier
       0.0000 (  0.0%)   0.0000 (  0.0%)   0.0000 (  0.0%)   0.0033 (  6.9%)  Hello World Pass
       0.0100 (100.0%)   0.0100 (100.0%)   0.0200 (100.0%)   0.0479 (100.0%)  TOTAL

    As you can see, our implementation above is pretty fast. The additional passes listed are automatically inserted by the opt tool to verify that the LLVM emitted by your pass is still valid and well formed LLVM, which hasn't been broken somehow.

    Now that you have seen the basics of the mechanics behind passes, we can talk about some more details of how they work and how to use them.

    Pass classes and requirements

    One of the first things that you should do when designing a new pass is to decide what class you should subclass for your pass. The Hello World example uses the FunctionPass class for its implementation, but we did not discuss why or when this should occur. Here we talk about the classes available, from the most general to the most specific.

    When choosing a superclass for your Pass, you should choose the most specific class possible, while still being able to meet the requirements listed. This gives the LLVM Pass Infrastructure information necessary to optimize how passes are run, so that the resultant compiler isn't unnecessarily slow.

    The ImmutablePass class

    The most plain and boring type of pass is the "ImmutablePass" class. This pass type is used for passes that do not have to be run, do not change state, and never need to be updated. This is not a normal type of transformation or analysis, but can provide information about the current compiler configuration.

    Although this pass class is very infrequently used, it is important for providing information about the current target machine being compiled for, and other static information that can affect the various transformations.

    ImmutablePasses never invalidate other transformations, are never invalidated, and are never "run".

    The ModulePass class

    The ModulePass class is the most general of all superclasses that you can use. Deriving from ModulePass indicates that your pass uses the entire program as a unit, referring to function bodies in no predictable order, or adding and removing functions. Because nothing is known about the behavior of ModulePass subclasses, no optimization can be done for their execution.

    A module pass can use function level passes (e.g. dominators) using the getAnalysis interface getAnalysis<DominatorTree>(llvm::Function *) to provide the function to retrieve analysis result for, if the function pass does not require any module or immutable passes. Note that this can only be done for functions for which the analysis ran, e.g. in the case of dominators you should only ask for the DominatorTree for function definitions, not declarations.

    To write a correct ModulePass subclass, derive from ModulePass and overload the runOnModule method with the following signature:

    The runOnModule method

    virtual bool runOnModule(Module &M) = 0;

    The runOnModule method performs the interesting work of the pass. It should return true if the module was modified by the transformation and false otherwise.

    The CallGraphSCCPass class

    The CallGraphSCCPass is used by passes that need to traverse the program bottom-up on the call graph (callees before callers). Deriving from CallGraphSCCPass provides some mechanics for building and traversing the CallGraph, but also allows the system to optimize execution of CallGraphSCCPasses. If your pass meets the requirements outlined below, and doesn't meet the requirements of a FunctionPass or BasicBlockPass, you should derive from CallGraphSCCPass.

    Briefly: an SCC (strongly connected component) of the call graph is a maximal set of functions that directly or indirectly call each other; SCCs are computed with Tarjan's algorithm, and the pass visits them bottom-up (B-U), i.e. callees before callers.

    To be explicit, CallGraphSCCPass subclasses are:

    1. ... not allowed to inspect or modify any Functions other than those in the current SCC and the direct callers and direct callees of the SCC.
    2. ... required to preserve the current CallGraph object, updating it to reflect any changes made to the program.
    3. ... not allowed to add or remove SCC's from the current Module, though they may change the contents of an SCC.
    4. ... allowed to add or remove global variables from the current Module.
    5. ... allowed to maintain state across invocations of runOnSCC (including global data).

    Implementing a CallGraphSCCPass is slightly tricky in some cases because it has to handle SCCs with more than one node in it. All of the virtual methods described below should return true if they modified the program, or false if they didn't.
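    For concreteness, here is a minimal sketch of a CallGraphSCCPass that only inspects each SCC (the pass name PrintSCC and the registration strings are hypothetical; registration works as described for the Hello example):

```cpp
#include "llvm/Analysis/CallGraph.h"
#include "llvm/Analysis/CallGraphSCCPass.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

namespace {
  struct PrintSCC : public CallGraphSCCPass {
    static char ID;
    PrintSCC() : CallGraphSCCPass(ID) {}

    // Called once per SCC, callees before callers.
    virtual bool runOnSCC(CallGraphSCC &SCC) {
      errs() << "SCC:";
      for (CallGraphSCC::iterator I = SCC.begin(), E = SCC.end(); I != E; ++I)
        if (Function *F = (*I)->getFunction())  // null for external nodes
          errs() << " " << F->getName();
      errs() << "\n";
      return false;  // nothing was modified
    }
  };
}
char PrintSCC::ID = 0;
static RegisterPass<PrintSCC> X("print-scc-demo", "Print call graph SCCs (demo)", false, false);
```

    Because the pass only reads the SCC, the CallGraph is trivially preserved, satisfying rule 2 above.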

    The doInitialization(CallGraph &) method

    virtual bool doInitialization(CallGraph &CG);

    The doInitialization method is allowed to do most of the things that CallGraphSCCPasses are not allowed to do. They can add and remove functions, get pointers to functions, etc. The doInitialization method is designed to do simple initialization type of stuff that does not depend on the SCCs being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast).

    The runOnSCC method

    virtual bool runOnSCC(CallGraphSCC &SCC) = 0;

    The runOnSCC method performs the interesting work of the pass, and should return true if the module was modified by the transformation, false otherwise.

    The doFinalization(CallGraph &) method

    virtual bool doFinalization(CallGraph &CG);

    The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnSCC for every SCC in the program being compiled.

    The FunctionPass class

    In contrast to ModulePass subclasses, FunctionPass subclasses do have a predictable, local behavior that can be expected by the system. All FunctionPasses execute on each function in the program independent of all of the other functions in the program. FunctionPasses do not require that they are executed in a particular order, and FunctionPasses do not modify external functions.

    To be explicit, FunctionPass subclasses are not allowed to:

    1. Inspect or modify a Function other than the one currently being processed.
    2. Add or remove Functions from the current Module.
    3. Add or remove global variables from the current Module.
    4. Maintain state across invocations of runOnFunction (including global data).

    Implementing a FunctionPass is usually straightforward (See the Hello World pass for example). FunctionPasses may overload three virtual methods to do their work. All of these methods should return true if they modified the program, or false if they didn't.
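    All three methods together might look like the following sketch (the pass name BlockCounter is hypothetical; every method returns false because nothing is modified, and no state is kept across runOnFunction calls):

```cpp
#include "llvm/IR/Function.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

namespace {
  struct BlockCounter : public FunctionPass {
    static char ID;
    BlockCounter() : FunctionPass(ID) {}

    // Runs once, before any function is processed.
    virtual bool doInitialization(Module &M) { return false; }

    // Runs on each function; may only look at F itself.
    virtual bool runOnFunction(Function &F) {
      errs() << F.getName() << " has " << F.size() << " basic blocks\n";
      return false;
    }

    // Runs once, after every function has been processed.
    virtual bool doFinalization(Module &M) { return false; }
  };
}
char BlockCounter::ID = 0;
```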

    The doInitialization(Module &) method

    virtual bool doInitialization(Module &M);

    The doInitialization method is allowed to do most of the things that FunctionPasses are not allowed to do. They can add and remove functions, get pointers to functions, etc. The doInitialization method is designed to do simple initialization type of stuff that does not depend on the functions being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast).

    A good example of how this method should be used is the LowerAllocations pass. This pass converts malloc and free instructions into platform dependent malloc() and free() function calls. It uses the doInitialization method to get a reference to the malloc and free functions that it needs, adding prototypes to the module if necessary.

    The runOnFunction method

    virtual bool runOnFunction(Function &F) = 0;

    The runOnFunction method must be implemented by your subclass to do the transformation or analysis work of your pass. As usual, a true value should be returned if the function is modified.

    The doFinalization(Module &) method

    virtual bool doFinalization(Module &M);

    The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnFunction for every function in the program being compiled.

    The LoopPass class

    All LoopPasses execute on each loop in the function independent of all of the other loops in the function. LoopPass processes loops in loop nest order such that the outermost loop is processed last.

    LoopPass subclasses are allowed to update the loop nest using the LPPassManager interface. Implementing a loop pass is usually straightforward. LoopPasses may overload three virtual methods to do their work. All these methods should return true if they modified the program, or false if they didn't.
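    A minimal loop pass, sketched under the same assumptions as the examples above (the name LoopDepthDemo is hypothetical):

```cpp
#include "llvm/Analysis/LoopPass.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

namespace {
  struct LoopDepthDemo : public LoopPass {
    static char ID;
    LoopDepthDemo() : LoopPass(ID) {}

    // Called for every loop; inner loops are visited before the loops
    // that enclose them.
    virtual bool runOnLoop(Loop *L, LPPassManager &LPM) {
      errs() << "loop at nesting depth " << L->getLoopDepth() << "\n";
      return false;  // loop nest unchanged, so no LPPassManager updates needed
    }
  };
}
char LoopDepthDemo::ID = 0;
```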

    The doInitialization(Loop *, LPPassManager &) method

    virtual bool doInitialization(Loop *, LPPassManager &LPM);

    The doInitialization method is designed to do simple initialization type of stuff that does not depend on the functions being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast). The LPPassManager interface should be used to access Function or Module level analysis information.

    The runOnLoop method

    virtual bool runOnLoop(Loop *, LPPassManager &LPM) = 0;

    The runOnLoop method must be implemented by your subclass to do the transformation or analysis work of your pass. As usual, a true value should be returned if the function is modified. The LPPassManager interface should be used to update the loop nest.

    The doFinalization() method

    virtual bool doFinalization();

    The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnLoop for every loop in the program being compiled.

    The RegionPass class

    RegionPass is similar to LoopPass, but executes on each single-entry single-exit region in the function. RegionPass processes regions in nested order such that the outermost region is processed last.

    RegionPass subclasses are allowed to update the region tree by using the RGPassManager interface. You may overload three virtual methods of RegionPass to implement your own region pass. All these methods should return true if they modified the program, or false if they did not.

    The doInitialization(Region *, RGPassManager &) method

    virtual bool doInitialization(Region *, RGPassManager &RGM);

    The doInitialization method is designed to do simple initialization type of stuff that does not depend on the functions being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast). The RGPassManager interface should be used to access Function or Module level analysis information.

    The runOnRegion method

    virtual bool runOnRegion(Region *, RGPassManager &RGM) = 0;

    The runOnRegion method must be implemented by your subclass to do the transformation or analysis work of your pass. As usual, a true value should be returned if the region is modified. The RGPassManager interface should be used to update the region tree.

    The doFinalization() method

    virtual bool doFinalization();

    The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnRegion for every region in the program being compiled.

    The BasicBlockPass class

    BasicBlockPasses are just like FunctionPasses, except that they must limit their scope of inspection and modification to a single basic block at a time. As such, they are not allowed to do any of the following:

    1. Modify or inspect any basic blocks outside of the current one.
    2. Maintain state across invocations of runOnBasicBlock.
    3. Modify the control flow graph (by altering terminator instructions).
    4. Any of the things forbidden for FunctionPasses.

    BasicBlockPasses are useful for traditional local and "peephole" optimizations. They may override the same doInitialization(Module &) and doFinalization(Module &) methods that FunctionPasses have, but also have the following virtual methods that may also be implemented:
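    A block-local pass under these rules might be sketched as follows (the name BlockInstCount is hypothetical; it inspects only the current block and never touches the CFG):

```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

namespace {
  struct BlockInstCount : public BasicBlockPass {
    static char ID;
    BlockInstCount() : BasicBlockPass(ID) {}

    // May only look at (or rewrite instructions inside) BB itself.
    virtual bool runOnBasicBlock(BasicBlock &BB) {
      errs() << BB.getName() << ": " << BB.size() << " instructions\n";
      return false;
    }
  };
}
char BlockInstCount::ID = 0;
```

    A real peephole pass would pattern-match short instruction sequences within BB and rewrite them in place, returning true when it does.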

    The doInitialization(Function &) method

    virtual bool doInitialization(Function &F);

    The doInitialization method is allowed to do most of the things that BasicBlockPasses are not allowed to do, but that FunctionPasses can. The doInitialization method is designed to do simple initialization that does not depend on the BasicBlocks being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast).

    The runOnBasicBlock method

    virtual bool runOnBasicBlock(BasicBlock &BB) = 0;

    Override this function to do the work of the BasicBlockPass. This function is not allowed to inspect or modify basic blocks other than the parameter, and is not allowed to modify the CFG. A true value must be returned if the basic block is modified.

    The doFinalization(Function &) method

    virtual bool doFinalization(Function &F);

    The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnBasicBlock for every BasicBlock in the program being compiled. This can be used to perform per-function finalization.

    The MachineFunctionPass class

    A MachineFunctionPass is a part of the LLVM code generator that executes on the machine-dependent representation of each LLVM function in the program.

    Code generator passes are registered and initialized specially by TargetMachine::addPassesToEmitFile and similar routines, so they cannot generally be run from the opt or bugpoint commands.

    A MachineFunctionPass is also a FunctionPass, so all the restrictions that apply to a FunctionPass also apply to it. MachineFunctionPasses also have additional restrictions. In particular, MachineFunctionPasses are not allowed to do any of the following:

    1. Modify or create any LLVM IR Instructions, BasicBlocks, Arguments, Functions, GlobalVariables, GlobalAliases, or Modules.
    2. Modify a MachineFunction other than the one currently being processed.
    3. Maintain state across invocations of runOnMachineFunction (including global data).

    The runOnMachineFunction(MachineFunction &MF) method

    virtual bool runOnMachineFunction(MachineFunction &MF) = 0;

    runOnMachineFunction can be considered the main entry point of a MachineFunctionPass; that is, you should override this method to do the work of your MachineFunctionPass.

    The runOnMachineFunction method is called on every MachineFunction in a Module, so that the MachineFunctionPass may perform optimizations on the machine-dependent representation of the function. If you want to get at the LLVM Function for the MachineFunction you're working on, use MachineFunction's getFunction() accessor method — but remember, you may not modify the LLVM Function or its contents from a MachineFunctionPass.
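    A sketch of a machine function pass that reads (but never modifies) the corresponding IR function (the name MachineDemo is hypothetical; remember such a pass is driven by the code generator, not by opt):

```cpp
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineFunctionPass.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

namespace {
  struct MachineDemo : public MachineFunctionPass {
    static char ID;
    MachineDemo() : MachineFunctionPass(ID) {}

    virtual bool runOnMachineFunction(MachineFunction &MF) {
      // getFunction() gives read-only access to the IR Function; a
      // MachineFunctionPass must not modify the IR.
      errs() << "codegen: " << MF.getFunction()->getName() << "\n";
      return false;
    }
  };
}
char MachineDemo::ID = 0;
```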

    Pass registration

    In the Hello World example pass we illustrated how pass registration works, and discussed some of the reasons that it is used and what it does. Here we discuss how and why passes are registered.

    As we saw above, passes are registered with the RegisterPass template. The template parameter is the name of the pass that is to be used on the command line to specify that the pass should be added to a program (for example, with opt or bugpoint). The first argument is the name of the pass, which is to be used for the -help output of programs, as well as for debug output generated by the --debug-pass option.

    If you want your pass to be easily dumpable, you should implement the virtual print method:

    The print method

    virtual void print(llvm::raw_ostream &O, const Module *M) const;

    The print method must be implemented by "analyses" in order to print a human readable version of the analysis results. This is useful for debugging an analysis itself, as well as for other people to figure out how an analysis works. Use the opt -analyze argument to invoke this method.

    The llvm::raw_ostream parameter specifies the stream to write the results on, and the Module parameter gives a pointer to the top level module of the program that has been analyzed. Note however that this pointer may be NULL in certain circumstances (such as calling the Pass::dump() from a debugger), so it should only be used to enhance debug output, it should not be depended on.
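    A sketch of such a print implementation for a hypothetical analysis MyAnalysis (the class name and output format are illustrative):

```cpp
void MyAnalysis::print(raw_ostream &O, const Module *M) const {
  O << "MyAnalysis results:\n";
  // The Module pointer may be null (e.g. when called via Pass::dump()
  // from a debugger), so guard any use of it.
  if (M)
    O << "  analyzed module: " << M->getModuleIdentifier() << "\n";
}
```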

    Specifying interactions between passes

    One of the main responsibilities of the PassManager is to make sure that passes interact with each other correctly. Because PassManager tries to optimize the execution of passes it must know how the passes interact with each other and what dependencies exist between the various passes. To track this, each pass can declare the set of passes that are required to be executed before the current pass, and the passes which are invalidated by the current pass.

    Typically this functionality is used to require that analysis results are computed before your pass is run. Running arbitrary transformation passes can invalidate the computed analysis results, which is what the invalidation set specifies. If a pass does not implement the getAnalysisUsage method, it defaults to not having any prerequisite passes, and invalidating all other passes.

    The getAnalysisUsage method

    virtual void getAnalysisUsage(AnalysisUsage &Info) const;

    By implementing the getAnalysisUsage method, the required and invalidated sets may be specified for your transformation. The implementation should fill in the AnalysisUsage object with information about which passes are required and not invalidated. To do this, a pass may call any of the following methods on the AnalysisUsage object:

    The AnalysisUsage::addRequired<> and AnalysisUsage::addRequiredTransitive<> methods

    If your pass requires a previous pass to be executed (an analysis for example), it can use one of these methods to arrange for it to be run before your pass. LLVM has many different types of analyses and passes that can be required, spanning the range from DominatorSet to BreakCriticalEdges. Requiring BreakCriticalEdges, for example, guarantees that there will be no critical edges in the CFG when your pass has been run.

    Some analyses chain to other analyses to do their job. For example, an AliasAnalysis implementation is required to chain to other alias analysis passes. In cases where analyses chain, the addRequiredTransitive method should be used instead of the addRequired method. This informs the PassManager that the transitively required pass should be alive as long as the requiring pass is.

    The AnalysisUsage::addPreserved<> method

    One of the jobs of the PassManager is to optimize how and when analyses are run. In particular, it attempts to avoid recomputing data unless it needs to. For this reason, passes are allowed to declare that they preserve (i.e., they don't invalidate) an existing analysis if it's available. For example, a simple constant folding pass would not modify the CFG, so it can't possibly affect the results of dominator analysis. By default, all passes are assumed to invalidate all others.

    The AnalysisUsage class provides several methods which are useful in certain circumstances that are related to addPreserved. In particular, the setPreservesAll method can be called to indicate that the pass does not modify the LLVM program at all (which is true for analyses), and the setPreservesCFG method can be used by transformations that change instructions in the program but do not modify the CFG or terminator instructions (note that this property is implicitly set for BasicBlockPasses).

    addPreserved is particularly useful for transformations like BreakCriticalEdges. This pass knows how to update a small set of loop and dominator related analyses if they exist, so it can preserve them, despite the fact that it hacks on the CFG.
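    A hypothetical transformation that requires LoopInfo and promises to keep DominatorTree up to date might declare (the class name MyTransform is illustrative):

```cpp
void MyTransform::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.addRequired<LoopInfo>();        // must be computed before this pass runs
  AU.addPreserved<DominatorTree>();  // this pass updates it in place
}
```

    Declaring addPreserved is a promise: if the pass breaks the analysis without updating it, later passes will consume stale results.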

    Example implementations of getAnalysisUsage

    // This example modifies the program, but does not modify the CFG
    void LICM::getAnalysisUsage(AnalysisUsage &AU) const {
      AU.setPreservesCFG();
      AU.addRequired<LoopInfo>();
    }

    The getAnalysis<> and getAnalysisIfAvailable<> methods

    The Pass::getAnalysis<> method is automatically inherited by your class, providing you with access to the passes that you declared that you required with the getAnalysisUsage method. It takes a single template argument that specifies which pass class you want, and returns a reference to that pass. For example:

    bool LICM::runOnFunction(Function &F) {
      LoopInfo &LI = getAnalysis<LoopInfo>();
      //...
    }

    This method call returns a reference to the pass desired. You may get a runtime assertion failure if you attempt to get an analysis that you did not declare as required in your getAnalysisUsage implementation. This method can be called by your run* method implementation, or by any other local method invoked by your run* method.

    A module level pass can use function level analysis info using this interface. For example:

    bool ModuleLevelPass::runOnModule(Module &M) {
      //...
      DominatorTree &DT = getAnalysis<DominatorTree>(Func);
      //...
    }

    In the above example, runOnFunction for DominatorTree is called by the pass manager before returning a reference to the desired pass.

    If your pass is capable of updating analyses if they exist (e.g., BreakCriticalEdges, as described above), you can use the getAnalysisIfAvailable method, which returns a pointer to the analysis if it is active. For example:

    if (DominatorSet *DS = getAnalysisIfAvailable<DominatorSet>()) {
      // A DominatorSet is active. This code will update it.
    }

    Implementing Analysis Groups

    Now that we understand the basics of how passes are defined, how they are used, and how they are required from other passes, it's time to get a little bit fancier. All of the pass relationships that we have seen so far are very simple: one pass depends on one other specific pass to be run before it can run. For many applications this is great; for others, more flexibility is required.

    In particular, some analyses are defined such that there is a single simple interface to the analysis results, but multiple ways of calculating them. Consider alias analysis for example. The most trivial alias analysis returns "may alias" for any alias query. The most sophisticated analysis is a flow-sensitive, context-sensitive interprocedural analysis that can take a significant amount of time to execute (and obviously, there is a lot of room between these two extremes for other implementations). To cleanly support situations like this, the LLVM Pass Infrastructure supports the notion of Analysis Groups.

    Analysis Group Concepts

    An Analysis Group is a single simple interface that may be implemented by multiple different passes. Analysis Groups can be given human readable names just like passes, but unlike passes, they need not derive from the Pass class. An analysis group may have one or more implementations, one of which is the "default" implementation.

    Analysis groups are used by client passes just like other passes are: through the AnalysisUsage::addRequired() and Pass::getAnalysis() methods. In order to resolve this requirement, the PassManager scans the available passes to see if any implementations of the analysis group are available. If none is available, the default implementation is created for the pass to use. All standard rules for interaction between passes still apply.
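    The resolution rule can be sketched in plain C++ (all class names below are invented illustrations; the real mechanism is wired up with RegisterAnalysisGroup and INITIALIZE_AG_PASS, shown later): one abstract interface, a conservative default implementation, and an opt-in override:

```cpp
#include <memory>
#include <string>

// One abstract interface that several "passes" can implement.
struct AliasAnalysisLike {
  virtual ~AliasAnalysisLike() = default;
  virtual std::string alias(const std::string &a, const std::string &b) = 0;
};

// The "default" implementation: cheap and trivially conservative.
struct BasicAALike : AliasAnalysisLike {
  std::string alias(const std::string &a, const std::string &b) override {
    return a == b ? "must" : "may";
  }
};

// An opt-in override that pretends to be more precise.
struct FancyAALike : AliasAnalysisLike {
  std::string alias(const std::string &, const std::string &) override {
    return "no";
  }
};

// Models what the PassManager does: use an explicitly requested
// implementation if one exists, otherwise fall back to the default.
std::unique_ptr<AliasAnalysisLike> resolve(bool fancyRequested) {
  if (fancyRequested)
    return std::make_unique<FancyAALike>();
  return std::make_unique<BasicAALike>();
}
```

The client code only sees AliasAnalysisLike, which mirrors how gcse only sees the AliasAnalysis interface regardless of which implementation was scheduled.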

    Although Pass Registration is optional for normal passes, all analysis group implementations must be registered, and must use the INITIALIZE_AG_PASS template to join the implementation pool. Also, a default implementation of the interface must be registered with RegisterAnalysisGroup.

    As a concrete example of an Analysis Group in action, consider the AliasAnalysis analysis group. The default implementation of the alias analysis interface (the basicaa pass) just does a few simple checks that don't require significant analysis to compute (such as: two different globals can never alias each other, etc). Passes that use the AliasAnalysis interface (for example the gcse pass) do not care which implementation of alias analysis is actually provided, they just use the designated interface.

    From the user's perspective, commands work just like normal. Issuing the command opt -gcse ... will cause the basicaa class to be instantiated and added to the pass sequence. Issuing the command opt -somefancyaa -gcse ... will cause the gcse pass to use the somefancyaa alias analysis (which doesn't actually exist, it's just a hypothetical example) instead.

    Using RegisterAnalysisGroup

    The RegisterAnalysisGroup template is used to register the analysis group itself, while the INITIALIZE_AG_PASS is used to add pass implementations to the analysis group. First, an analysis group should be registered, with a human readable name provided for it. Unlike registration of passes, there is no command line argument to be specified for the Analysis Group Interface itself, because it is "abstract":

    static RegisterAnalysisGroup<AliasAnalysis> A("Alias Analysis");

    Once the analysis is registered, passes can declare that they are valid implementations of the interface by using the following code:

    namespace {
      // Declare that we implement the AliasAnalysis interface
      INITIALIZE_AG_PASS(FancyAA, AliasAnalysis, "somefancyaa",
                         "A more complex alias analysis implementation",
                         false,  // Is CFG Only?
                         true,   // Is Analysis?
                         false); // Is default Analysis Group implementation?
    }

    This just shows a class FancyAA that uses the INITIALIZE_AG_PASS macro both to register and to "join" the AliasAnalysis analysis group. Every implementation of an analysis group should join using this macro.

    namespace {
      // Declare that we implement the AliasAnalysis interface
      INITIALIZE_AG_PASS(BasicAA, AliasAnalysis, "basicaa",
                         "Basic Alias Analysis (default AA impl)",
                         false, // Is CFG Only?
                         true,  // Is Analysis?
                         true); // Is default Analysis Group implementation?
    }

    Here we show how the default implementation is specified (using the final argument to the INITIALIZE_AG_PASS template). There must be exactly one default implementation available at all times for an Analysis Group to be used. Only the default implementation can derive from ImmutablePass. Here we declare that the BasicAliasAnalysis pass is the default implementation for the interface.

    Pass Statistics

    The Statistic class is designed to be an easy way to expose various success metrics from passes. These statistics are printed at the end of a run, when the -stats command line option is enabled on the command line. See the Statistics section in the Programmer's Manual for details.
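    As a rough self-contained model of the idea (ToyStatistic is an invented name; the real class lives in LLVM's Statistic header and prints only when -stats is given), counters register themselves globally and are dumped in one combined report at the end of the run:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Toy model of pass statistics: each counter self-registers at
// construction, passes bump it while running, and all totals are
// printed together at the end of the compilation.
class ToyStatistic {
  static std::vector<ToyStatistic *> &registry() {
    static std::vector<ToyStatistic *> r;
    return r;
  }
  std::string desc_;
  unsigned value_ = 0;
public:
  explicit ToyStatistic(std::string desc) : desc_(std::move(desc)) {
    registry().push_back(this);
  }
  ToyStatistic &operator++() { ++value_; return *this; }
  // Models the -stats dump at the end of a run.
  static std::string report() {
    std::ostringstream os;
    for (const ToyStatistic *s : registry())
      os << s->value_ << " " << s->desc_ << "\n";
    return os.str();
  }
};
```

A pass would declare one counter per metric and increment it each time the corresponding transformation fires.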

    What PassManager does

    The PassManager class takes a list of passes, ensures their prerequisites are set up correctly, and then schedules passes to run efficiently. All of the LLVM tools that run passes use the PassManager for execution of these passes.

    The PassManager does two main things to try to reduce the execution time of a series of passes:

    1. Share analysis results. The PassManager attempts to avoid recomputing analysis results as much as possible. This means keeping track of which analyses are available already, which analyses get invalidated, and which analyses are needed to be run for a pass. An important part of this work is that the PassManager tracks the exact lifetime of all analysis results, allowing it to free memory allocated to holding analysis results as soon as they are no longer needed.

    2. Pipeline the execution of passes on the program. The PassManager attempts to get better cache and memory usage behavior out of a series of passes by pipelining the passes together. This means that, given a series of consecutive FunctionPasses, it will execute all of the FunctionPasses on the first function, then all of the FunctionPasses on the second function, etc... until the entire program has been run through the passes.

      This improves the cache behavior of the compiler, because it is only touching the LLVM program representation for a single function at a time, instead of traversing the entire program. It reduces the memory consumption of the compiler, because, for example, only one DominatorSet needs to be calculated at a time. This also makes it possible to implement some interesting enhancements in the future.
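    The pipelined schedule can be sketched as a pair of nested loops (a minimal illustration with invented names): the outer loop walks functions and the inner loop walks passes, so each function is run through every pass before the next function is touched:

```cpp
#include <string>
#include <vector>

// Models the pipelining described above: given passes P1, P2 and
// functions f, g, the manager runs P1(f), P2(f), P1(g), P2(g) rather
// than running P1 over the whole module before P2 starts.
std::vector<std::string>
pipelinedOrder(const std::vector<std::string> &passes,
               const std::vector<std::string> &functions) {
  std::vector<std::string> trace;
  for (const std::string &fn : functions)   // outer loop: one function...
    for (const std::string &p : passes)     // ...through every pass
      trace.push_back(p + "(" + fn + ")");
  return trace;
}
```

Swapping the two loops would give the non-pipelined, whole-module-at-a-time order that the PassManager deliberately avoids for FunctionPasses.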

    The effectiveness of the PassManager is influenced directly by how much information it has about the behaviors of the passes it is scheduling. For example, the "preserved" set is intentionally conservative in the face of an unimplemented getAnalysisUsage method. Not implementing when it should be implemented will have the effect of not allowing any analysis results to live across the execution of your pass.
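    A minimal model of this conservative rule (ToyPass and runPass are invented for illustration): after a transformation runs, every cached analysis result is discarded unless the pass declared that it preserves them:

```cpp
#include <set>
#include <string>

// What getAnalysisUsage would have declared for this pass.
struct ToyPass {
  bool preservesAll = false;
};

// With no preservation info, the manager must assume the pass
// invalidated everything; with setPreservesAll, caches survive.
void runPass(const ToyPass &P, std::set<std::string> &available) {
  if (!P.preservesAll)
    available.clear();
}
```

This is why an analysis-only pass that forgets to override getAnalysisUsage silently forces every downstream analysis to be recomputed.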

    The PassManager class exposes a --debug-pass command line option that is useful for debugging pass execution, seeing how things work, and diagnosing when you should be preserving more analyses than you currently are. (To get information about all of the variants of the --debug-pass option, just type "opt -help-hidden").

    By using the --debug-pass=Structure option, for example, we can see how our Hello World pass interacts with other passes. Let's try it out with the gcse and licm passes:

    $ opt -load ../../../Debug+Asserts/lib/Hello.so -gcse -licm --debug-pass=Structure < hello.bc > /dev/null
    Module Pass Manager
      Function Pass Manager
        Dominator Set Construction
        Immediate Dominators Construction
        Global Common Subexpression Elimination
    --  Immediate Dominators Construction
    --  Global Common Subexpression Elimination
        Natural Loop Construction
        Loop Invariant Code Motion
    --  Natural Loop Construction
    --  Loop Invariant Code Motion
        Module Verifier
    --  Dominator Set Construction
    --  Module Verifier
      Bitcode Writer
    --Bitcode Writer

    This output shows us when passes are constructed and when the analysis results are known to be dead (prefixed with "--"). Here we see that GCSE uses dominator and immediate dominator information to do its job. The LICM pass uses natural loop information, which uses dominator sets, but not immediate dominators. Because immediate dominators are no longer useful after the GCSE pass, it is immediately destroyed. The dominator sets are then reused to compute natural loop information, which is then used by the LICM pass.

    After the LICM pass, the module verifier runs (which is automatically added by the opt tool), which uses the dominator set to check that the resultant LLVM code is well formed. After it finishes, the dominator set information is destroyed, after being computed once, and shared by three passes.

    Let's see how this changes when we run the Hello World pass in between the two passes:

    $ opt -load ../../../Debug+Asserts/lib/Hello.so -gcse -hello -licm --debug-pass=Structure < hello.bc > /dev/null
    Module Pass Manager
      Function Pass Manager
        Dominator Set Construction
        Immediate Dominators Construction
        Global Common Subexpression Elimination
    --  Dominator Set Construction
    --  Immediate Dominators Construction
    --  Global Common Subexpression Elimination
        Hello World Pass
    --  Hello World Pass
        Dominator Set Construction
        Natural Loop Construction
        Loop Invariant Code Motion
    --  Natural Loop Construction
    --  Loop Invariant Code Motion
        Module Verifier
    --  Dominator Set Construction
    --  Module Verifier
      Bitcode Writer
    --Bitcode Writer
    Hello: __main
    Hello: puts
    Hello: main

    Here we see that the Hello World pass has killed the Dominator Set pass, even though it doesn't modify the code at all! To fix this, we need to add the following getAnalysisUsage method to our pass:

    // We don't modify the program, so we preserve all analyses
    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
      AU.setPreservesAll();
    }

    Now when we run our pass, we get this output:

    $ opt -load ../../../Debug+Asserts/lib/Hello.so -gcse -hello -licm --debug-pass=Structure < hello.bc > /dev/null
    Pass Arguments:  -gcse -hello -licm
    Module Pass Manager
      Function Pass Manager
        Dominator Set Construction
        Immediate Dominators Construction
        Global Common Subexpression Elimination
    --  Immediate Dominators Construction
    --  Global Common Subexpression Elimination
        Hello World Pass
    --  Hello World Pass
        Natural Loop Construction
        Loop Invariant Code Motion
    --  Loop Invariant Code Motion
    --  Natural Loop Construction
        Module Verifier
    --  Dominator Set Construction
    --  Module Verifier
      Bitcode Writer
    --Bitcode Writer
    Hello: __main
    Hello: puts
    Hello: main

    This shows that we don't accidentally invalidate dominator information anymore, and therefore do not have to compute it twice.

    The releaseMemory method

    virtual void releaseMemory();

    The PassManager automatically determines when to compute analysis results, and how long to keep them around for. Because the lifetime of the pass object itself is effectively the entire duration of the compilation process, we need some way to free analysis results when they are no longer useful. The releaseMemory virtual method is the way to do this.

    If you are writing an analysis or any other pass that retains a significant amount of state (for use by another pass which "requires" your pass and uses the getAnalysis method) you should implement releaseMemory to, well, release the memory allocated to maintain this internal state. This method is called after the run* method for the class, before the next call of run* in your pass.
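    A self-contained sketch of the contract (ToyAnalysis is an invented class; only the role of the releaseMemory hook mirrors LLVM): the analysis accumulates per-function state during run* calls, and drops it all when the manager decides the results are no longer needed:

```cpp
#include <cstddef>
#include <map>
#include <string>

// Toy analysis that retains per-function state between runs, and frees
// it when the manager invokes the releaseMemory hook.
class ToyAnalysis {
  std::map<std::string, int> state_;
public:
  void runOnFunction(const std::string &fn) {
    state_[fn] = 42;  // pretend analysis result for this function
  }
  std::size_t stateSize() const { return state_.size(); }
  void releaseMemory() { state_.clear(); }  // called between run* calls
};
```

Without the hook, the state would live as long as the pass object itself, which is effectively the whole compilation.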

    Registering dynamically loaded passes

    Size matters when constructing production quality tools using LLVM, both for the purposes of distribution, and for regulating the resident code size when running on the target system. Therefore, it becomes desirable to selectively use some passes, while omitting others, and to maintain the flexibility to change configurations later on. You want to be able to do all this, and provide feedback to the user. This is where pass registration comes into play.

    The fundamental mechanisms for pass registration are the MachinePassRegistry class and subclasses of MachinePassRegistryNode.

    An instance of MachinePassRegistry is used to maintain a list of MachinePassRegistryNode objects. This instance maintains the list and communicates additions and deletions to the command line interface.

    An instance of a MachinePassRegistryNode subclass is used to maintain information provided about a particular pass. This information includes the command line name, the command help string, and the address of the function used to create an instance of the pass. A global static constructor of one of these instances registers with a corresponding MachinePassRegistry; the static destructor unregisters. Thus a pass that is statically linked in the tool will be registered at start up. A dynamically loaded pass will register on load and unregister at unload.
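    The register/unregister lifecycle can be modeled in a few lines (ToyRegistry and ToyRegistryNode are invented stand-ins for MachinePassRegistry and its node subclasses): the node's constructor adds it to a shared list and its destructor removes it, which is what makes both static linking and dynamic load/unload work:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Shared list of registered pass names (toy stand-in for the registry).
class ToyRegistry {
public:
  static std::vector<std::string> &names() {
    static std::vector<std::string> list;
    return list;
  }
};

// A node registers itself on construction and unregisters on
// destruction, so lifetime of the node controls registration.
class ToyRegistryNode {
  std::string name_;
public:
  explicit ToyRegistryNode(std::string name) : name_(std::move(name)) {
    ToyRegistry::names().push_back(name_);  // register
  }
  ~ToyRegistryNode() {
    auto &v = ToyRegistry::names();         // unregister
    v.erase(std::remove(v.begin(), v.end(), name_), v.end());
  }
};
```

A global ToyRegistryNode in a shared object would register at dlopen time and unregister at dlclose time, exactly as the text describes.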

    Using existing registries

    There are predefined registries to track instruction scheduling (RegisterScheduler) and register allocation (RegisterRegAlloc) machine passes. Here we will describe how to register a register allocator machine pass.

    Implement your register allocator machine pass. In your register allocator .cpp file add the following include:

    #include "llvm/CodeGen/RegAllocRegistry.h"

    Also in your register allocator .cpp file, define a creator function in the form:

    FunctionPass *createMyRegisterAllocator() {
      return new MyRegisterAllocator();
    }

    Note that the signature of this function should match the type of RegisterRegAlloc::FunctionPassCtor. In the same file add the "installing" declaration, in the form:

    static RegisterRegAlloc myRegAlloc("myregalloc",
                                       "  my register allocator help string",
                                       createMyRegisterAllocator);

    Note that the two spaces prior to the help string produce a tidy result on the -help query.

    $ llc -help
      ...
      -regalloc    - Register allocator to use (default=linearscan)
        =linearscan -   linear scan register allocator
        =local      -   local register allocator
        =simple     -   simple register allocator
        =myregalloc -   my register allocator help string
      ...

    And that's it. The user is now free to use -regalloc=myregalloc as an option. Registering instruction schedulers is similar except use the RegisterScheduler class. Note that the RegisterScheduler::FunctionPassCtor is significantly different from RegisterRegAlloc::FunctionPassCtor.

    To force the load/linking of your register allocator into the llc/lli tools, add your creator function's global declaration to Passes.h and add a "pseudo" call line to llvm/Codegen/LinkAllCodegenComponents.h.

    Creating new registries

    The easiest way to get started is to clone one of the existing registries; we recommend llvm/CodeGen/RegAllocRegistry.h. The key things to modify are the class name and the FunctionPassCtor type.

    Then you need to declare the registry. Example: if your pass registry is RegisterMyPasses then define:

    MachinePassRegistry RegisterMyPasses::Registry;

    And finally, declare the command line option for your passes. Example:

    cl::opt<RegisterMyPasses::FunctionPassCtor, false,
            RegisterPassParser<RegisterMyPasses> >
    MyPassOpt("mypass",
              cl::init(&createDefaultMyPass),
              cl::desc("my pass option help"));

    Here the command option is "mypass", with createDefaultMyPass as the default creator.
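    A stripped-down model of such a registry (all names are invented; the real one is built on cl::opt and RegisterPassParser as shown above): command-line names map to creator functions, with a default creator used when no override is selected:

```cpp
#include <functional>
#include <map>
#include <string>

// Toy pass object and creator-function type.
struct ToyPassObj { std::string kind; };
using Ctor = std::function<ToyPassObj()>;

// Registry keyed by command-line name, holding creator functions plus a
// default creator to fall back on.
class ToyPassRegistry {
  std::map<std::string, Ctor> ctors_;
  Ctor default_;
public:
  explicit ToyPassRegistry(Ctor def) : default_(std::move(def)) {}
  void add(const std::string &name, Ctor c) { ctors_[name] = std::move(c); }
  ToyPassObj create(const std::string &name) const {
    auto it = ctors_.find(name);
    return it != ctors_.end() ? it->second() : default_();
  }
};
```

Parsing "-mypass=<name>" then reduces to one map lookup followed by a call through the stored function pointer.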

    Using GDB with dynamically loaded passes

    Unfortunately, using GDB with dynamically loaded passes is not as easy as it should be. First of all, you can't set a breakpoint in a shared object that has not been loaded yet, and second of all there are problems with inlined functions in shared objects. Here are some suggestions for debugging your pass with GDB.

    For the sake of discussion, I'm going to assume that you are debugging a transformation invoked by opt, although nothing described here depends on that.

    Setting a breakpoint in your pass

    First thing you do is start gdb on the opt process:

    $ gdb opt
    GNU gdb 5.0
    Copyright 2000 Free Software Foundation, Inc.
    GDB is free software, covered by the GNU General Public License, and you are
    welcome to change it and/or distribute copies of it under certain conditions.
    Type "show copying" to see the conditions.
    There is absolutely no warranty for GDB.  Type "show warranty" for details.
    This GDB was configured as "sparc-sun-solaris2.6"...
    (gdb)

    Note that opt has a lot of debugging information in it, so it takes time to load. Be patient. Since we cannot set a breakpoint in our pass yet (the shared object isn't loaded until runtime), we must execute the process, and have it stop before it invokes our pass, but after it has loaded the shared object. The most foolproof way of doing this is to set a breakpoint in PassManager::run and then run the process with the arguments you want:

    (gdb) break llvm::PassManager::run
    Breakpoint 1 at 0x2413bc: file Pass.cpp, line 70.
    (gdb) run test.bc -load $(LLVMTOP)/llvm/Debug+Asserts/lib/[libname].so -[passoption]
    Starting program: opt test.bc -load $(LLVMTOP)/llvm/Debug+Asserts/lib/[libname].so -[passoption]
    Breakpoint 1, PassManager::run (this=0xffbef174, M=@0x70b298) at Pass.cpp:70
    70        bool PassManager::run(Module &M) { return PM->run(M); }
    (gdb)

    Once opt stops in the PassManager::run method you are now free to set breakpoints in your pass so that you can trace through execution or do other standard debugging stuff.

    Miscellaneous Problems

    Once you have the basics down, there are a couple of problems that GDB has,some with solutions, some without.

  • Inline functions have bogus stack information. In general, GDB does a pretty good job getting stack traces and stepping through inline functions. When a pass is dynamically loaded however, it somehow completely loses this capability. The only solution I know of is to de-inline a function (move it from the body of a class to a .cpp file).
  • Restarting the program breaks breakpoints. After following the information above, you have succeeded in getting some breakpoints planted in your pass. Next thing you know, you restart the program (i.e., you type "run" again), and you start getting errors about breakpoints being unsettable. The only way I have found to "fix" this problem is to delete the breakpoints that are already set in your pass, run the program, and re-set the breakpoints once execution stops in PassManager::run.
    Hopefully these tips will help with common case debugging situations. If you'd like to contribute some tips of your own, just contact Chris.

    Future extensions planned

    Although the LLVM Pass Infrastructure is very capable as it stands, and does some nifty stuff, there are things we'd like to add in the future. Here is where we are going:

    Multithreaded LLVM

    Multiple CPU machines are becoming more common and compilation can never be fast enough: obviously we should allow for a multithreaded compiler. Because of the semantics defined for passes above (specifically they cannot maintain state across invocations of their run* methods), a nice clean way to implement a multithreaded compiler would be for the PassManager class to create multiple instances of each pass object, and allow the separate instances to be hacking on different parts of the program at the same time.

    This implementation would prevent each of the passes from having to implement multithreaded constructs, requiring only the LLVM core to have locking in a few places (for global resources). Although this is a simple extension, we simply haven't had time (or multiprocessor machines, thus a reason) to implement this. Despite that, we have kept the LLVM passes SMP ready, and you should too.

