Commit

Merge pull request #49 from manodeep/main
Fixed typo
adrn authored Mar 14, 2024
2 parents 1f560bf + ba14be3 commit c7e281b
Showing 5 changed files with 7 additions and 7 deletions.
4 changes: 2 additions & 2 deletions LICENSE

@@ -12,8 +12,8 @@ furnished to do so, subject to the following conditions:
 The above copyright notice and this permission notice shall be included in all
 copies or substantial portions of the Software.

-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WdataANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WdataANTIES OF MERCHANTABILITY,
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
2 changes: 1 addition & 1 deletion docs/index.rst

@@ -51,7 +51,7 @@ so we call :class:`list` on this to get out the values.

 The above :func:`map` example executes the function in *serial* over each
 element in ``data``. That is, it just goes one by one through the ``data``
-object, executes the function, returns, and cdataies on, all on the same
+object, executes the function, returns, and carries on, all on the same
 processor core. If we can write our code in this style (using :func:`map`), we
 can easily swap in the ``Pool`` classes provided by ``schwimmbad`` to allow us
 to switch between various parallel processing frameworks. The easiest to
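For context, the pattern the docs/index.rst hunk describes can be sketched with the built-in :func:`map`; the pool-based variant is shown only as a comment, since it assumes ``schwimmbad`` is installed and the ``MultiPool`` usage is drawn from its docs rather than from this diff:

```python
# Sketch of the serial map pattern described above (not part of this commit).
def worker(x):
    return x ** 2

data = [1, 2, 3, 4]

# Built-in map runs worker one element at a time on a single core;
# it returns an iterator, so wrap it in list() to get the values out.
results = list(map(worker, data))
print(results)  # [1, 4, 9, 16]

# With schwimmbad installed, the same call shape swaps in a parallel pool
# (assumption: MultiPool, as in the schwimmbad documentation):
#   from schwimmbad import MultiPool
#   with MultiPool() as pool:
#       results = list(pool.map(worker, data))
```

The point of the docs passage is exactly this call-shape compatibility: code written against ``map`` needs no restructuring to run under a pool.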
4 changes: 2 additions & 2 deletions paper/paper.bib

@@ -9,9 +9,9 @@ @techreport{Forum1994

 @InProceedings{Gabriel2004,
   author = {Edgar Gabriel and Graham E. Fagg and George Bosilca
-            and Thara Angskun and Jack J. Dongdataa and Jeffrey
+            and Thara Angskun and Jack J. Dongarra and Jeffrey
             M. Squyres and Vishal Sahay and Prabhanjan Kambadur
-            and Brian Bdataett and Andrew Lumsdaine and Ralph
+            and Brian Barrett and Andrew Lumsdaine and Ralph
             H. Castain and David J. Daniel and Richard L. Graham
             and Timothy S. Woodall },
   title = {Open {MPI}: Goals, Concept, and Design of a Next
2 changes: 1 addition & 1 deletion paper/paper.md

@@ -26,7 +26,7 @@ Many scientific and computing problems require doing some calculation on all
 elements of some data set. If the calculations can be executed in parallel
 (i.e. without any communication between calculations), these problems are said
 to be [*perfectly
-parallel*](https://en.wikipedia.org/wiki/Embdataassingly_parallel). On computers
+parallel*](https://en.wikipedia.org/wiki/Embarrassingly_parallel). On computers
 with multiple processing cores, these tasks can be distributed and executed in
 parallel to greatly improve performance. A common paradigm for handling these
 distributed computing problems is to use a processing "pool": the "tasks" (the
2 changes: 1 addition & 1 deletion tests/test_utils.py

@@ -25,7 +25,7 @@ def test_batch_tasks():
     tasks = batch_tasks(10, n_tasks=101, args=(99,))
     assert len(tasks) == 10

-    # With dataay specified
+    # With data specified
     data = np.random.random(size=123)
     tasks = batch_tasks(10, data=data, args=(99,))
     assert len(tasks) == 10
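The ``batch_tasks`` calls in the test above split ``n_tasks`` work items into a fixed number of batches. A hypothetical, self-contained sketch of that kind of near-even splitting (illustrative only; this is not schwimmbad's implementation, and ``split_into_batches`` is an invented name):

```python
# Hypothetical illustration of splitting 101 tasks into 10 near-even
# batches of (start, stop) index ranges, as the test above exercises.
def split_into_batches(n_batches, n_tasks):
    base, extra = divmod(n_tasks, n_batches)
    batches, start = [], 0
    for i in range(n_batches):
        # The first `extra` batches each take one leftover task.
        stop = start + base + (1 if i < extra else 0)
        batches.append((start, stop))
        start = stop
    return batches

batches = split_into_batches(10, 101)
print(len(batches))  # 10
```

Under this scheme every task index appears in exactly one batch, which is why the test can assert only on the number of batches returned.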
