---
layout: grin-default
---

Overview

GRIN is a compiler framework and an intermediate representation. It is short for Graph Reduction Intermediate Notation. GRIN could significantly improve the tooling, performance and size of functional programs and could enable functional technologies to target new platforms like WebAssembly.

Functional languages are compiled in three stages:

  1. Language frontend
  2. High-level optimizer (functional)
  3. Low-level optimizer (imperative)

While LLVM handles the last stage well, GRIN, as a functional optimizer, captures the original language semantics and can perform transformations that are infeasible at the LLVM level.
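
As a hypothetical illustration (my own Haskell example, not code from the GRIN pipeline) of the kind of rewrite a functional-level optimizer can perform but LLVM cannot, because constructor and laziness information is gone by the time LLVM sees the program:

```haskell
-- Before optimization: the Maybe value and the call to `maybe` are visible
-- to a functional optimizer as constructors and a known higher-order function.
describe :: Int -> String
describe n = maybe "none" show (if n > 0 then Just n else Nothing)

-- After inlining `maybe` and pushing the case into both branches of the `if`,
-- the Just/Nothing allocation disappears entirely:
describe' :: Int -> String
describe' n = if n > 0 then show n else "none"

main :: IO ()
main = mapM_ (putStrLn . describe) [-1, 3]
```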

Currently the following language frontends are under development:

  • Haskell
    The Haskell language evolves with the Glasgow Haskell Compiler. GHC development is usually focused on language features and high-level optimization, while the machine code generator gets less attention. GHC/GRIN combines GHC's Haskell frontend with the GRIN optimizer. It is a work in progress; check its current status.
  • Idris
    Adding the GRIN optimizer to the Idris compiler pipeline will make programs faster and smaller. Idris/GRIN can already compile many programs, but the runtime is a work in progress.
  • Agda
    Plugging the GRIN optimizer in after the Agda frontend is on our roadmap, but currently Agda/GRIN is only an initial code stub.

GRIN aims to bring the benefits of whole program optimization to a wide range of functional programming languages.

Support the project on Patreon.

Benefits For Programmers

GRIN helps to improve the industrial presence of Haskell.

Tooling

Good tooling is essential for industrial software development. In order to get anywhere near feature parity with the tools of mainstream programming languages, we need to be able to inspect the whole program at once. This can help with all stages of development: immediate feedback while typing, visual debugging and profiling. With such runtime tooling it would be possible to show memory structures, debug laziness and visualize unevaluated expressions.
Having access to the whole program could also improve the code editor experience. It would be possible to highlight optimization effects in the source code, e.g. dead code and data, linear variable usage, laziness, strictness, tail calls, unboxing, and stack/heap allocation.
It seems feasible to implement these features using the Language Server Protocol and GRIN.

Smaller Executables

Whole program analysis helps the compiler remove dead code and dead data fields more effectively. For example, it can remove unused type class instances. This results in much smaller executables. It also cuts down the number of external libraries and symbols referenced in the program binary.
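
A minimal Haskell sketch of the idea (hypothetical code, not from the GRIN test suite): with whole-program visibility the optimizer can prove that only one instance is ever used and drop the others together with everything they reference.

```haskell
import Data.Char (toUpper)

class Pretty a where
  pretty :: a -> String

instance Pretty Int where
  pretty = show

instance Pretty Bool where      -- never used anywhere in this program, so a
  pretty = map toUpper . show   -- whole-program optimizer can drop the instance
                                -- and its dependency on Data.Char

main :: IO ()
main = putStrLn (pretty (42 :: Int))
```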

Better Performance

Whole program optimization can remove a lot of redundant computation, e.g. unnecessary laziness and redundant memory operations. These program simplifications often make other optimizations possible. GRIN represents memory operations and laziness explicitly, which allows aggressive memory layout optimizations such as unboxing, i.e. turning heap values into stack or register values. GRIN also eliminates indirect function calls, which enables LLVM to perform more optimizations.
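
To make "explicit memory operations and laziness" concrete, here is a minimal Haskell model (my own sketch, not actual GRIN syntax): a suspended computation is an ordinary tagged heap node that must be evaluated before use, so every thunk is visible to the optimizer and redundant ones can be removed.

```haskell
-- Haskell model of GRIN-style nodes: C-tagged nodes are evaluated constructors,
-- F-tagged nodes are suspended function calls (explicit thunks).
data Node
  = CInt Int         -- evaluated integer
  | FAdd Node Node   -- suspended addition, represented as plain data

eval :: Node -> Int
eval (CInt n)   = n
eval (FAdd a b) = eval a + eval b   -- forcing the suspension

main :: IO ()
main = print (eval (FAdd (CInt 1) (CInt 2)))
-- Whole program analysis can rewrite this to `print (1 + 2)`: the heap nodes
-- and the eval call vanish (laziness removal and unboxing).
```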

New Platforms

GRIN uses LLVM for machine code generation. LLVM provides robust tooling and support for all mainstream platforms. With this design choice the main platforms (x64, ARM and WebAssembly) can be supported easily, covering desktop, mobile and the web.

Benefits For Researchers

GRIN provides a framework for functional language experimentation.

Analysis Framework

Whole program compilation makes it easy to observe and analyse programs. For example, researchers can use GHC/GRIN to experiment with real-world functional programs. The GHC/GRIN compiler pipeline can serialize both the STG-level and the GRIN-level intermediate representation (IR) for the whole program. With this framework it is easy to convert large Haskell programs to a research IR. We also plan to support all GHC primitive operations in the GRIN interpreter and the GRIN native code generator.

Related Work

The GRIN Project aims to utilize the most recent results of compiler research, especially pointer analysis and whole program optimization.

Whole program compilers

Program analysis

  • Souffle Datalog compiler
    Souffle synthesizes a native parallel C++ program from a logic specification. It is used to implement points-to, control flow and other analyses efficiently (a toy points-to sketch follows after this list).

  • P4F: Pushdown Control-Flow Analysis for Free
    P4F is an advanced control flow analysis. It can boost optimizer effectiveness by providing precise control flow information.
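
As a toy illustration of the kind of points-to fixpoint such Datalog engines compute (my own Haskell sketch, not Souffle code), here is an Andersen-style analysis over two input relations:

```haskell
import qualified Data.Set as Set

type Var = String

-- addressOf (p, x) encodes p = &x; assign (p, q) encodes p = q.
-- The derivation step mirrors the Datalog rule
--   pointsTo(p, x) :- assign(p, q), pointsTo(q, x).
-- iterated until a fixpoint is reached.
pointsTo :: [(Var, Var)] -> [(Var, Var)] -> Set.Set (Var, Var)
pointsTo addressOf assign = go (Set.fromList addressOf)
  where
    go facts =
      let derived = Set.fromList
            [ (p, x) | (p, q) <- assign, (q', x) <- Set.toList facts, q == q' ]
          facts' = facts `Set.union` derived
      in  if facts' == facts then facts else go facts'

main :: IO ()
main = print (pointsTo [("a", "x")] [("b", "a"), ("c", "b")])
-- fromList [("a","x"),("b","x"),("c","x")]
```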

Vectorisation

  • ISPC: Intel SPMD Program Compiler
    Single Program Multiple Data (SPMD) is the programming model used by GPUs. ISPC implements the SPMD model on CPU SIMD vector instructions such as SSE and AVX. It demonstrates that interprocedural data flow vectorisation can be much more performant than loop vectorisation.

  • FLRC: Automatic SIMD Vectorization for Haskell
    The Intel Labs Haskell Research Compiler used a SIMD vectorisation optimization designed specifically for pure functional languages.

Memory management

  • ASAP Memory Management
    ASAP (As Static As Possible) describes a compile-time automatic memory management system using whole program analysis. It essentially generates a specialized garbage collector for each compiled program. With ASAP it seems possible to run Haskell programs without a run-time garbage collector.

  • Gibbon / Compiling Tree Transforms to Operate on Packed Representations
    Gibbon is a research compiler that experiments with packed memory data representations. It compiles functional programs to work with a pointerless data representation, which reduces cache misses and improves runtime performance. This technique essentially turns data (de)serialization into a raw memory copy.
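
A small Haskell sketch of the packed idea (hypothetical, not Gibbon output): a tree is laid out as a flat preorder tag stream, so traversal is a linear scan and the in-memory form is already its serialized form.

```haskell
data Tree = Leaf Int | Node Tree Tree

-- Packed layout: tag 0 followed by the value for Leaf,
-- tag 1 followed by the two subtrees laid out inline for Node.
pack :: Tree -> [Int]
pack (Leaf n)   = [0, n]
pack (Node l r) = 1 : pack l ++ pack r

-- Sum the leaves directly from the packed stream,
-- returning the unconsumed remainder of the input.
sumPacked :: [Int] -> (Int, [Int])
sumPacked (0 : n : rest) = (n, rest)
sumPacked (1 : rest)     = let (a, rest')  = sumPacked rest
                               (b, rest'') = sumPacked rest'
                           in (a + b, rest'')
sumPacked _              = error "malformed packed tree"

main :: IO ()
main = print (fst (sumPacked (pack (Node (Leaf 1) (Node (Leaf 2) (Leaf 3))))))
-- prints 6
```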

Support

Please support the project on Patreon.

Ask Us

Gitter chat

Please ask if you have any questions
(e.g. about code, design, research, or support).

Email: [email protected]

FAQ

What is the difference between GHC and GRIN?
GHC is an incremental compiler, therefore it cannot perform whole-program optimization, i.e. optimization across compilation units (with some exceptions at the highest level, e.g. rewrite rules). GRIN is a whole-program optimizer, which goes all the way down to the level of primitive memory access operations.

Why don't you improve GHC instead of GRIN?
Whole-program optimization is a fundamentally different design decision that cannot be easily retrofitted into GHC proper. Instead, we try to reuse as many parts as possible, e.g. the GHC frontend.

Can you reuse the GHC runtime for GHC/GRIN?
No, because the GHC runtime is built for the STG memory model. In contrast, there’s no uniform memory representation in GRIN.