
Record resnorm in powm! #196

Open · wants to merge 25 commits into `master`

Commits (25)
3fafd15
Remove unused non-mutating iterable constructors
haampie Dec 6, 2017
cb0cb43
Add missing tests for given initial guess
haampie Dec 6, 2017
c69a85e
Merge pull request #177 from haampie/remove-solve-function
haampie Dec 7, 2017
c81a9ad
Merge pull request #178 from haampie/improve-coverage
haampie Dec 7, 2017
1e3aec2
Remove randomized algorithms from the package
haampie Dec 8, 2017
400bb46
Remove randomized interpolative decomposition
haampie Dec 9, 2017
9383e1d
Merge pull request #180 from haampie/remove-randomized-algorithms
haampie Dec 9, 2017
44b4323
Make convergence history show a summary
haampie Dec 20, 2017
0cb1a5e
Merge pull request #182 from haampie/feature-friendly-convergence-his…
haampie Dec 24, 2017
9654044
Fix mutable structs that are not mutable
haampie Dec 24, 2017
4bbc0d2
Merge pull request #183 from haampie/mutable-struct-to-struct
haampie Dec 24, 2017
4ad1e49
Add broadcasting for dampened type
haampie Dec 25, 2017
7992ed9
Broadcasting over BLAS1
haampie Dec 20, 2017
e894d69
Remove now unused dependency
haampie Dec 25, 2017
7103a27
Merge pull request #184 from haampie/feature-blas-1-as-broadcast
haampie Dec 27, 2017
d97012a
Reduce allocation in inner loop
samuelpowell Jan 4, 2018
b8056b2
Merge pull request #187 from samuelpowell/idrsalloc
haampie Jan 4, 2018
231424c
Resolve conflicting name Hessenberg
haampie Jan 24, 2018
756edef
Merge pull request #189 from haampie/fix-hessenberg-name
haampie Jan 24, 2018
2c75164
Allow pre-allocation and re-use of buffers
Mar 24, 2018
8b3d634
Make CGStateVariables work for all arrays
Mar 24, 2018
7466658
Missed change for CGStateVariables generic array support
Mar 24, 2018
9a5d488
Documenting CGStateVariables
Mar 24, 2018
c1c6ded
Merge pull request #193 from mohamed82008/master
haampie Mar 25, 2018
c33c73f
Record resnorm in powm!
tkf Apr 8, 2018
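The final commit is the PR's actual change: recording the residual norm in `powm!`. A hedged usage sketch, assuming `powm` follows the same `log = true` convention as the other solvers in the package (returning a `ConvergenceHistory` as the last value, with the per-iteration norms under `:resnorm`):

```julia
using IterativeSolvers

A = [2.0 0.0;
     0.0 1.0]

# With log = true the solvers return a ConvergenceHistory as the last value;
# with this PR, powm records the residual norm of each iteration in it.
λ, x, history = powm(A; tol = 1e-10, maxiter = 200, log = true)

history[:resnorm]   # per-iteration residual norms
```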
4 changes: 1 addition & 3 deletions README.md
@@ -23,6 +23,4 @@ Pkg.add("IterativeSolvers")

- [Issue #1](https://github.com/JuliaMath/IterativeSolvers.jl/issues/1) documents the implementation roadmap for iterative solvers.

- The interfaces between the various algorithms are still in flux, but aim to be consistent.

- [Issue #33](https://github.com/JuliaMath/IterativeSolvers.jl/issues/33) documents the implementation roadmap for randomized algorithms.
- The interfaces between the various algorithms are still in flux, but aim to be consistent.
1 change: 0 additions & 1 deletion REQUIRE
@@ -1,4 +1,3 @@
julia 0.6

RecipesBase
SugarBLAS
1 change: 0 additions & 1 deletion docs/make.jl
@@ -25,7 +25,6 @@ makedocs(
"Power method" => "eigenproblems/power_method.md",
],
"SVDL" => "svd/svdl.md",
"Randomized algorithms" => "randomized.md",
"The iterator approach" => "iterators.md",
"About" => [
"Contributing" => "about/CONTRIBUTING.md",
8 changes: 1 addition & 7 deletions docs/src/index.md
@@ -4,7 +4,7 @@ IterativeSolvers.jl is a Julia package that provides efficient iterative algorit

For bug reports, feature requests and questions please submit an issue. If you're interested in contributing, please see the [Contributing](@ref) guide.

For more information on future methods have a look at the package [roadmap](https://github.com/JuliaLang/IterativeSolvers.jl/issues/1) on deterministic methods, for randomized algorithms check [here](https://github.com/JuliaLang/IterativeSolvers.jl/issues/33).
For more information on future methods have a look at the package [roadmap](https://github.com/JuliaLang/IterativeSolvers.jl/issues/1).

## What method should I use for linear systems?

@@ -29,9 +29,3 @@ When solving **least-squares** problems we currently offer just [LSMR](@ref LSMR
For the Singular Value Decomposition we offer [SVDL](@ref SVDL), which is the Golub-Kahan-Lanczos procedure.

For eigenvalue problems we have at this point just the [Power Method](@ref PowerMethod) and some convenience wrappers to do shift-and-invert.

## Randomized algorithms

[Randomized algorithms](@ref Randomized) have gotten some traction lately. Some of the methods mentioned in [^Halko2011] have been implemented as well, although their quality is generally poor. Also note that many classical methods such as the subspace iteration, BiCG and recent methods like IDR(s) are also "randomized" in some sense.

[^Halko2011]: Halko, Nathan, Per-Gunnar Martinsson, and Joel A. Tropp. "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions." SIAM review 53.2 (2011): 217-288.
35 changes: 0 additions & 35 deletions docs/src/randomized.md

This file was deleted.

18 changes: 5 additions & 13 deletions src/IterativeSolvers.jl
@@ -5,17 +5,14 @@ eigensystems, and singular value problems.
"""
module IterativeSolvers

using SugarBLAS

include("common.jl")
include("orthogonalize.jl")
include("history.jl")

#Specialized factorizations
include("factorization.jl")
# Factorizations
include("hessenberg.jl")

#Linear solvers
# Linear solvers
include("stationary.jl")
include("stationary_sparse.jl")
include("cg.jl")
@@ -25,19 +22,14 @@ include("gmres.jl")
include("chebyshev.jl")
include("idrs.jl")

#Eigensolvers
# Eigensolvers
include("simple.jl")

#SVD solvers
# SVD solvers
include("svdl.jl")

#Least-squares
# Least-squares
include("lsqr.jl")
include("lsmr.jl")

#Randomized algorithms
include("rlinalg.jl")
include("rsvd.jl")
include("rsvd_fnkz.jl")

end
17 changes: 4 additions & 13 deletions src/bicgstabl.jl
@@ -24,8 +24,6 @@ mutable struct BiCGStabIterable{precT, matT, solT, vecT <: AbstractVector, small
M::smallMatT
end

bicgstabl_iterator(A, b, l; kwargs...) = bicgstabl_iterator!(zerox(A, b), A, b, l; initial_zero = true, kwargs...)

function bicgstabl_iterator!(x, A, b, l::Int = 2;
Pl = Identity(),
max_mv_products = size(A, 2),
@@ -49,8 +47,7 @@ function bicgstabl_iterator!(x, A, b, l::Int = 2;
copy!(residual, b)
else
A_mul_B!(residual, A, x)
@blas! residual -= one(T) * b
@blas! residual *= -one(T)
residual .= b .- residual
mv_products += 1
end

@@ -91,10 +88,7 @@ function next(it::BiCGStabIterable, iteration::Int)
β = ρ / it.σ

# us[:, 1 : j] .= rs[:, 1 : j] - β * us[:, 1 : j]
for i = 1 : j
@blas! view(it.us, :, i) *= -β
@blas! view(it.us, :, i) += one(T) * view(it.rs, :, i)
end
it.us[:, 1 : j] .= it.rs[:, 1 : j] .- β .* it.us[:, 1 : j]

# us[:, j + 1] = Pl \ (A * us[:, j])
next_u = view(it.us, :, j + 1)
@@ -104,18 +98,15 @@ function next(it::BiCGStabIterable, iteration::Int)
it.σ = dot(it.r_shadow, next_u)
α = ρ / it.σ

# rs[:, 1 : j] .= rs[:, 1 : j] - α * us[:, 2 : j + 1]
for i = 1 : j
@blas! view(it.rs, :, i) -= α * view(it.us, :, i + 1)
end
it.rs[:, 1 : j] .-= α .* it.us[:, 2 : j + 1]

# rs[:, j + 1] = Pl \ (A * rs[:, j])
next_r = view(it.rs, :, j + 1)
A_mul_B!(next_r, it.A , view(it.rs, :, j))
A_ldiv_B!(it.Pl, next_r)

# x = x + α * us[:, 1]
@blas! it.x += α * view(it.us, :, 1)
it.x .+= α .* view(it.us, :, 1)
end

# Bookkeeping
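Throughout this diff, `@blas!` macro calls (from the now-dropped SugarBLAS dependency) are replaced by Julia's fused dot-broadcasts. A minimal sketch of why the two are equivalent for the axpy-style updates above — the fused broadcast lowers to a single allocation-free loop:

```julia
# axpy-style update y ← y + α*u, written two equivalent ways
α = 0.5
u = [1.0, 2.0, 3.0]

y1 = [1.0, 1.0, 1.0]
y1 .+= α .* u                # fused broadcast: one pass, no temporary for α*u

y2 = [1.0, 1.0, 1.0]
for i in eachindex(y2)       # the loop the broadcast effectively compiles to
    y2[i] += α * u[i]
end

y1 == y2                     # true
```

The same fusion applies to the two-term updates such as `it.us[:, 1:j] .= it.rs[:, 1:j] .- β .* it.us[:, 1:j]`: all dotted operations in one assignment fuse into a single traversal.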
51 changes: 34 additions & 17 deletions src/cg.jl
@@ -1,6 +1,6 @@
import Base: start, next, done

export cg, cg!, CGIterable, PCGIterable, cg_iterator, cg_iterator!
export cg, cg!, CGIterable, PCGIterable, cg_iterator!, CGStateVariables

mutable struct CGIterable{matT, solT, vecT, numT <: Real}
A::matT
@@ -43,16 +43,15 @@ end
function next(it::CGIterable, iteration::Int)
# u := r + βu (almost an axpy)
β = it.residual^2 / it.prev_residual^2
@blas! it.u *= β
@blas! it.u += one(eltype(it.u)) * it.r
it.u .= it.r .+ β .* it.u

# c = A * u
A_mul_B!(it.c, it.A, it.u)
α = it.residual^2 / dot(it.u, it.c)

# Improve solution and residual
@blas! it.x += α * it.u
@blas! it.r -= α * it.c
it.x .+= α .* it.u
it.r .-= α .* it.c

it.prev_residual = it.residual
it.residual = norm(it.r)
@@ -73,16 +72,15 @@ function next(it::PCGIterable, iteration::Int)

# u := c + βu (almost an axpy)
β = it.ρ / ρ_prev
@blas! it.u *= β
@blas! it.u += one(eltype(it.u)) * it.c
it.u .= it.c .+ β .* it.u

# c = A * u
A_mul_B!(it.c, it.A, it.u)
α = it.ρ / dot(it.u, it.c)

# Improve solution and residual
@blas! it.x += α * it.u
@blas! it.r -= α * it.c
it.x .+= α .* it.u
it.r .-= α .* it.c

it.residual = norm(it.r)

@@ -92,15 +90,32 @@ end

# Utility functions

@inline cg_iterator(A, b, Pl = Identity(); kwargs...) = cg_iterator!(zerox(A, b), A, b, Pl; initially_zero = true, kwargs...)
"""
Intermediate CG state variables used inside `cg` and `cg!`. `u`, `r` and `c` should be of the same type as the solution of `cg` or `cg!`.
```
struct CGStateVariables{T,Tx<:AbstractArray{T}}
u::Tx
r::Tx
c::Tx
end
```
"""
struct CGStateVariables{T,Tx<:AbstractArray{T}}
u::Tx
r::Tx
c::Tx
end

function cg_iterator!(x, A, b, Pl = Identity();
tol = sqrt(eps(real(eltype(b)))),
maxiter::Int = size(A, 2),
statevars::CGStateVariables = CGStateVariables{eltype(x),typeof(x)}(zeros(x), similar(x), similar(x)),
initially_zero::Bool = false
)
u = zeros(x)
r = similar(x)
u = statevars.u
r = statevars.r
c = statevars.c
u .= zero(eltype(x))
copy!(r, b)

# Compute r with an MV-product or not.
@@ -111,8 +126,8 @@ function cg_iterator!(x, A, b, Pl = Identity();
reltol = residual * tol # Save one dot product
else
mv_products = 1
c = A * x
@blas! r -= one(eltype(x)) * c
A_mul_B!(c, A, x)
r .-= c
residual = norm(r)
reltol = norm(b) * tol
end
@@ -149,15 +164,16 @@ cg(A, b; kwargs...) = cg!(zerox(A, b), A, b; initially_zero = true, kwargs...)

## Keywords

- `statevars::CGStateVariables`: holds three work arrays similar to `x` for intermediate results;
- `initially_zero::Bool`: If `true` assumes that `iszero(x)` so that one
matrix-vector product can be saved when computing the initial
residual vector;
- `Pl = Identity()`: left preconditioner of the method. Should be symmetric,
positive-definite like `A`.
positive-definite like `A`;
- `tol::Real = sqrt(eps(real(eltype(b))))`: tolerance for stopping condition `|r_k| / |r_0| ≤ tol`;
- `maxiter::Int = size(A,2)`: maximum number of iterations;
- `verbose::Bool = false`: print method information;
- `log::Bool = false`: keep track of the residual norm in each iteration;
- `log::Bool = false`: keep track of the residual norm in each iteration.

# Output

@@ -179,6 +195,7 @@ function cg!(x, A, b;
tol = sqrt(eps(real(eltype(b)))),
maxiter::Int = size(A, 2),
log::Bool = false,
statevars::CGStateVariables = CGStateVariables{eltype(x), typeof(x)}(zeros(x), similar(x), similar(x)),
verbose::Bool = false,
Pl = Identity(),
kwargs...
@@ -188,7 +205,7 @@ function cg!(x, A, b;
log && reserve!(history, :resnorm, maxiter + 1)

# Actually perform CG
iterable = cg_iterator!(x, A, b, Pl; tol = tol, maxiter = maxiter, kwargs...)
iterable = cg_iterator!(x, A, b, Pl; tol = tol, maxiter = maxiter, statevars = statevars, kwargs...)
if log
history.mvps = iterable.mv_products
end
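The new `statevars` keyword lets callers allocate the three CG work arrays once and reuse them across solves. A sketch of the intended usage pattern (written in current Julia syntax rather than the 0.6 syntax of the diff; the SPD test matrix is illustrative only):

```julia
using IterativeSolvers, LinearAlgebra, Random

n = 100
B = randn(n, n)
A = B'B + n * I            # symmetric positive-definite test matrix
b = randn(n)
x = zeros(n)

# Allocate the u, r, c buffers once …
statevars = CGStateVariables(zero(x), similar(x), similar(x))

# … and reuse them across repeated solves, avoiding per-call allocation.
for trial in 1:5
    randn!(b)
    fill!(x, 0)
    cg!(x, A, b; statevars = statevars, initially_zero = true, maxiter = 2n)
end

norm(A * x - b)            # small residual for the last right-hand side
```

Because the default constructor infers the type parameters from the arguments, this also works for GPU or other non-`Array` vector types, which is the point of the "generic array support" commits above.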
14 changes: 4 additions & 10 deletions src/chebyshev.jl
@@ -37,26 +37,20 @@ function next(cheb::ChebyshevIterable, iteration::Int)
else
β = (cheb.λ_diff * cheb.α / 2) ^ 2
cheb.α = inv(cheb.λ_avg - β)

# Almost an axpy u = c + βu
scale!(cheb.u, β)
@blas! cheb.u += one(T) * cheb.c
cheb.u .= cheb.c .+ β .* cheb.u
end

A_mul_B!(cheb.c, cheb.A, cheb.u)
cheb.mv_products += 1

@blas! cheb.x += cheb.α * cheb.u
@blas! cheb.r -= cheb.α * cheb.c
cheb.x .+= cheb.α .* cheb.u
cheb.r .-= cheb.α .* cheb.c

cheb.resnorm = norm(cheb.r)

cheb.resnorm, iteration + 1
end

chebyshev_iterable(A, b, λmin::Real, λmax::Real; kwargs...) =
chebyshev_iterable!(zerox(A, b), A, b, λmin, λmax; kwargs...)

function chebyshev_iterable!(x, A, b, λmin::Real, λmax::Real;
tol = sqrt(eps(real(eltype(b)))), maxiter = size(A, 2), Pl = Identity(), initially_zero = false)

@@ -76,7 +70,7 @@ function chebyshev_iterable!(x, A, b, λmin::Real, λmax::Real;
mv_products = 0
else
A_mul_B!(c, A, x)
@blas! r -= one(T) * c
r .-= c
resnorm = norm(r)
reltol = tol * norm(b)
mv_products = 1
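The coefficient updates in the hunk above follow the standard Chebyshev semi-iteration recurrence, which needs only the eigenvalue bounds of the SPD operator and no inner products. A standalone sketch of just the α/β recurrence (the bounds `λmin`, `λmax` are illustrative):

```julia
# Chebyshev step-size recurrence, mirroring the fields of ChebyshevIterable:
# β = (λ_diff * α / 2)^2 ;  α = inv(λ_avg - β)
function cheb_coefficients(λmin, λmax, iters)
    λ_avg  = (λmax + λmin) / 2   # center of the spectrum
    λ_diff = (λmax - λmin) / 2   # half-width of the spectrum
    α = inv(λ_avg)               # first iteration, as in the diff
    αs = [α]
    for k in 2:iters
        β = (λ_diff * α / 2)^2   # β_k from the previous α
        α = inv(λ_avg - β)       # damped step size for iteration k
        push!(αs, α)
    end
    return αs
end

cheb_coefficients(1.0, 10.0, 5)
```

Each iteration then applies `x .+= α .* u` and `r .-= α .* c`, exactly as in the broadcast form of the diff.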