Although Python is not a pure functional programming language, it is a multi-paradigm language that gives you enough freedom to benefit from the functional approach. There are theoretical and practical advantages to the functional style:
- Formal provability
- Modularity
- Composability
- Ease of debugging and testing
The fn.py library provides you with the missing "batteries" to get the most out of the functional approach, even in a mostly imperative program.
To install fn.py, simply:
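pip install fn.py  # assuming the fork keeps the fn.py package name on PyPI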
`Pipe` provides a simple syntax for piping values between single-parameter functions.
Usage example:
>>> from fn import _
>>> from fn.monad import Pipe
>>> val = Pipe(10) >> (_ + 10) >> (_ + 5)
>>> print(val.value)
25
>>> val = Pipe(range(10)) >> (filter, _ < 6) >> sum
>>> print(val.value)
15
`Either` is a wrapper for values that may be either an expected value or an error. Further functions are applied to a wrapped value, but are skipped once the wrapper holds an error.
Usage examples:
>>> from fn import _
>>> from fn.monad import Either
>>> val = Either(10) >> (_ + 10) >> (_ + 5)
>>> print(val)
25
>>> def raiser(x): raise Exception()
>>> val = Either(10) >> raiser >> (_ + 5)
>>> if val.is_error:
...     print(val.error)
... else:
...     print(val.value)
...
Exception()
This fork drops support for Python versions older than 3.5 in order to provide better support for type hints. Changes include:
- The `@curried` decorator can now be used on functions with type hints
- Curried versions of `map` and `filter` that can be imported from `fn.iters`
- `map_list` and `filter_list`, which convert the output of the map or filter to a list (see the sketch after this list)
- `map_tuple` and `filter_tuple`, which convert the output of the map or filter to a tuple
- `Pipe` and `Either` both provide type hints (known issue: the overloaded `>>` operator produces incorrect type errors)
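For example, the curried variants let you build a reusable transformation first and supply the iterable later. A minimal sketch (the curried call order here is an assumption; check the `fn.iters` docstrings for the exact signatures):
from fn import _
from fn.iters import map_list, filter_list

# pass the function first, the iterable second (assumed curried signature)
double_all = map_list(_ * 2)
assert double_all(range(3)) == [0, 2, 4]
assert filter_list(_ < 2)(range(5)) == [0, 1]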
from fn import _
from fn.op import zipwith
from itertools import repeat
assert list(map(_ * 2, range(5))) == [0,2,4,6,8]
assert list(filter(_ < 10, [9,10,11])) == [9]
assert list(zipwith(_ + _)([0,1,2], repeat(10))) == [10,11,12]
You can find more examples of using `_` in the test case declarations (attribute resolution, method calling, slicing).
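For instance, attribute resolution and indexing look like this (a short sketch; the exact behavior is defined in the test cases mentioned above):
from fn import _

# attribute resolution: _.imag builds an attribute getter
assert (_.imag)(3 + 4j) == 4.0

# indexing: _[0] builds an item getter
assert (_[0])([10, 20, 30]) == 10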
Attention! If you work in an interactive Python shell, remember that `_` there means "the latest output", so you may get unpredictable results. In that case, you can do something like `from fn import _ as X` (and then write functions like `X * 2`).
If you are not sure what your function is going to do, you can print it:
from fn import _
print(_ + 2)  # "(x1) => (x1 + 2)"
print(_ + _ * _)  # "(x1, x2, x3) => (x1 + (x2 * x3))"
`_` will fail with `ArityError` (a `TypeError` subclass) when given an incorrect number of arguments. This is one more restriction to ensure that you did everything right:
>>> from fn import _
>>> (_ + _)(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "fn/underscore.py", line 82, in __call__
raise ArityError(self, self._arity, len(args))
fn.underscore.ArityError: (_ + _) expected 2 arguments, got 1
Attention: Persistent data structures are under active development.
A persistent data structure is a data structure that always preserves the previous version of itself when it is modified (more formal information on Wikipedia). Each operation on such a data structure yields a new, updated structure instead of modifying it in place (all previous versions are potentially available, or GC-ed when possible).
Let's take a quick look:
>>> from fn.immutable import SkewHeap
>>> s1 = SkewHeap(10)
>>> s2 = s1.insert(20)
>>> s2
<fn.immutable.heap.SkewHeap object at 0x10b14c050>
>>> s3 = s2.insert(30)
>>> s3
<fn.immutable.heap.SkewHeap object at 0x10b14c158> # <-- other object
>>> s3.extract()
(10, <fn.immutable.heap.SkewHeap object at 0x10b14c050>)
>>> s3.extract() # <-- s3 isn't changed
(10, <fn.immutable.heap.SkewHeap object at 0x10b11c052>)
If you think I'm totally crazy and this will work desperately slowly, just give it 5 minutes. Relax, take a deep breath, and read about a few techniques that make persistent data structures fast and efficient: structural sharing and path copying.
To see how they work in "pictures", you can check the great slides from Zach Allaun's talk (StrangeLoop 2013): "Functional Vectors, Maps And Sets In Julia".
And, if you are brave enough, go and read:
- Chris Okasaki, "Purely Functional Data Structures" (Amazon)
- Fethi Rabhi and Guy Lapalme, "Algorithms: A Functional Programming Approach" (Amazon)
Available immutable data structures in the `fn.immutable` module:
- `LinkedList`: the most "obvious" persistent data structure, used as a building block for other list-based structures (stack, queue)
- `Stack`: wraps the linked list implementation with the well-known pop/push API (see the sketch below)
- `Queue`: uses two linked lists and lazy copying to provide O(1) enqueue and dequeue operations
- `Deque` (in progress): "Confluently Persistent Deques via Data Structural Bootstrapping"
- `Deque` based on the `FingerTree` data structure (see more information below)
- `Vector`: O(log32(n)) access to elements by index (which is near-O(1) for reasonable vector sizes); the implementation is based on `BitmappedTrie` and is almost a drop-in replacement for the built-in Python `list`
- `SkewHeap`: self-adjusting heap implemented as a binary tree with a specific branching model, uses heap merge as the basic operation; more information in "Self-adjusting heaps"
- `PairingHeap`: "The Pairing-Heap: A New Form of Self-Adjusting Heap"
- `Dict` (in progress): persistent hash map implementation based on `BitmappedTrie`
- `FingerTree` (in progress): "Finger Trees: A Simple General-purpose Data Structure"
Use the docstrings to get more information about each data structure, as well as sample code.
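For example, `Stack` follows the same persistent pattern as the `SkewHeap` shown above. A minimal sketch (the exact return shape of `pop` is an assumption here, so verify against the docstrings):
from fn.immutable import Stack

s0 = Stack()
s1 = s0.push(10)        # returns a new stack; s0 is untouched
s2 = s1.push(20)
value, rest = s2.pop()  # assumed to return (top value, new stack), like SkewHeap.extract
assert value == 20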
To get a clearer picture of how persistent heaps work (`SkewHeap` and `PairingHeap`), you can look at the slides from my talk "Union-based heaps" (with analyzed data structure definitions in Python and Haskell).
Note. Most functional languages use persistent data structures as basic building blocks; well-known examples are Clojure, Haskell and Scala. The Clojure community puts much effort into popularizing programming based on the idea of data immutability. There are a few amazing talks given by Rich Hickey (creator of Clojure); you can check them to find answers to both "How?" and "Why?":
Lazily-evaluated Scala-style streams. The basic idea: evaluate each new element "on demand" and share calculated elements between all created iterators. The `Stream` object supports the `<<` operator for pushing new elements when necessary.
Simplest cases:
from fn import Stream
s = Stream() << [1,2,3,4,5]
assert list(s) == [1,2,3,4,5]
assert s[1] == 2
assert list(s[0:2]) == [1,2]
s = Stream() << range(6) << [6,7]
assert list(s) == [0,1,2,3,4,5,6,7]
def gen():
yield 1
yield 2
yield 3
s = Stream() << gen << (4,5)
assert list(s) == [1,2,3,4,5]
A lazily-evaluated stream is useful for infinite sequences, e.g. the Fibonacci sequence can be calculated as:
from fn import Stream
from fn.iters import take, drop, map
from operator import add
f = Stream()
fib = f << [0, 1] << map(add, f, drop(1, f))
assert list(take(10, fib)) == [0,1,1,2,3,5,8,13,21,34]
assert fib[20] == 6765
assert list(fib[30:35]) == [832040,1346269,2178309,3524578,5702887]
`fn.recur.tco` is a workaround for dealing with TCO without heavy stack utilization. Let's start with a simple example of recursive factorial calculation:
def fact(n):
if n == 0: return 1
return n * fact(n-1)
This variant works, but it's really ugly. Why? It uses memory heavily, because every recursive call keeps its stack frame alive until the final result is computed. If you execute this function with a large `n` (more than `sys.getrecursionlimit()`), CPython will fail with:
>>> import sys
>>> fact(sys.getrecursionlimit() * 2)
... many many lines of stacktrace ...
RecursionError: maximum recursion depth exceeded
This is good, since it protects you from terrible mistakes in your code.
How can we optimize this solution? The answer is simple: let's transform the function to use a tail call:
def fact(n, acc=1):
if n == 0: return acc
return fact(n-1, acc*n)
Why is this variant better? Because you don't need to remember previous values to calculate the final result. There's more about tail call optimization on Wikipedia. But... the Python interpreter will execute this function the same way as the previous one, so you won't gain anything.
`fn.recur.tco` gives you a mechanism for writing "somewhat optimized" tail-call recursion (using the "trampoline" approach):
from fn import recur
@recur.tco
def fact(n, acc=1):
if n == 0: return False, acc
return True, (n-1, acc*n)
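The earlier failure case now completes without hitting the recursion limit:
import sys
assert fact(5) == 120
fact(sys.getrecursionlimit() * 2)  # no RecursionError: the decorator loops instead of recursing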
`@recur.tco` is a decorator that executes your function in a `while` loop and checks the output:
- `(False, result)` means that we are finished
- `(True, args, kwargs)` means that we need to call the function again with other arguments
- `(func, args, kwargs)` switches the function to be executed inside the while loop
The last variant is really useful when you need to switch the callable inside the evaluation loop. A good example of such a situation is recursively detecting whether a given number is odd or even:
>>> from fn import recur
>>> @recur.tco
... def even(x):
... if x == 0: return False, True
... return odd, (x-1,)
...
>>> @recur.tco
... def odd(x):
... if x == 0: return False, False
... return even, (x-1,)
...
>>> print(even(100000))
True
Attention: be careful when processing mutable/immutable data structures.
`fn.uniform` provides "unification" of lazy functionality, so that the following functions work the same way in Python 2+/3+:
- `map` (returns `itertools.imap` in Python 2+)
- `filter` (returns `itertools.ifilter` in Python 2+)
- `reduce` (returns `functools.reduce` in Python 3+)
- `zip` (returns `itertools.izip` in Python 2+)
- `range` (returns `xrange` in Python 2+)
- `filterfalse` (returns `itertools.ifilterfalse` in Python 2+)
- `zip_longest` (returns `itertools.izip_longest` in Python 2+)
- `accumulate` (backported to Python < 3.3)
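Imports from `fn.uniform` therefore always give you the lazy variants, for example:
from fn.uniform import map, reduce

lazy = map(lambda x: x * 2, [1, 2, 3])   # an iterator, not a list
assert list(lazy) == [2, 4, 6]
assert reduce(lambda acc, x: acc + x, [1, 2, 3]) == 6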
`fn.iters` provides high-level recipes for working with iterators. Most of them are taken from the Python docs and adapted to work with both Python 2+ and 3+. I have already submitted recipes such as `drop`, `takelast`, `droplast`, `splitat` and `splitby` as a docs patch, which is in review right now.
- `take`, `drop`
- `takelast`, `droplast`
- `head` (alias: `first`), `tail` (alias: `rest`)
- `second`, `ffirst`
- `compact`, `reject`
- `every`, `some`
- `iterate`
- `consume`
- `nth`
- `padnone`, `ncycles`
- `repeatfunc`
- `grouper`, `powerset`, `pairwise`
- `roundrobin`
- `partition`, `splitat`, `splitby`
- `flatten`
- `iter_except`
- `first_true`
You can find more information about use cases in the docstrings for each function in the source code, and in the test cases.
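A quick taste of a few of these recipes (a sketch; see the docstrings for exact semantics):
from fn.iters import take, drop, head, second

assert list(take(3, range(10))) == [0, 1, 2]
assert list(drop(8, range(10))) == [8, 9]
assert head([1, 2, 3]) == 1
assert second([1, 2, 3]) == 2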
`fn.F` is a useful function wrapper that provides easy-to-use partial application and function composition.
from fn import F, _
from operator import add, mul
# F(f, *args) means partial application
# same as functools.partial but returns fn.F instance
assert F(add, 1)(10) == 11
# F << F means functions composition,
# so (F(f) << g)(x) == f(g(x))
f = F(add, 1) << F(mul, 100)
assert list(map(f, [0, 1, 2])) == [1, 101, 201]
assert list(map(F() << str << (_ ** 2) << (_ + 1), range(3))) == ["1", "4", "9"]
In many cases it also gives you a more readable "pipe" notation for function composition:
from fn import F, _
from fn.iters import filter, range
func = F() >> (filter, _ < 6) >> sum
assert func(range(10)) == 15
You can find more examples of composition usage in the `fn._` implementation source code.
`fn.op.apply` executes a given function with positional arguments taken from a list (or any other iterable). `fn.op.flip` returns a function that reverses the argument order before applying.
from fn.op import apply, flip
from operator import add, sub, mul
assert apply(add, [1, 2]) == 3
assert flip(sub)(20,10) == -10
assert list(map(apply, [add, mul], [(1,2), (10,20)])) == [3, 200]
`fn.op.foldl` and `fn.op.foldr` are folding operators. Each accepts a function of arity 2 and returns a function that can be used to reduce an iterable to a scalar: from left to right in the case of `foldl`, and from right to left in the case of `foldr`.
from fn import op, _
folder = op.foldr(_ * _, 1)
assert 6 == op.foldl(_ + _)([1,2,3])
assert 6 == folder([1,2,3])
A use case specific to right-side folding is applying a chain of functions: `foldr(call, init)` applied to `[f, g]` computes `f(g(init))`:
from fn.op import foldr, call
assert 100 == foldr(call, 0 )([lambda s: s**2, lambda k: k+10])
assert 400 == foldr(call, 10)([lambda s: s**2, lambda k: k+10])
`fn.func.curried` is a decorator for building curried functions. For example:
>>> from fn.func import curried
>>> @curried
... def sum5(a, b, c, d, e):
... return a + b + c + d + e
...
>>> sum5(1)(2)(3)(4)(5)
15
>>> sum5(1, 2, 3)(4, 5)
15