This repository has been archived by the owner on Jul 16, 2024. It is now read-only.

basic python docs #332

Merged: 2 commits on Feb 2, 2024
25 changes: 20 additions & 5 deletions source/docs/programming/photonlib/adding-vendordep.rst
@@ -3,10 +3,12 @@ Installing PhotonLib

What is PhotonLib?
------------------
PhotonLib is the vendor dependency that accompanies PhotonVision. We created this vendor dependency to make it easier for teams to retrieve vision data from their integrated vision system.
PhotonLib is the C++ and Java vendor dependency that accompanies PhotonVision. We created this vendor dependency to make it easier for teams to retrieve vision data from their integrated vision system.

Online Install
--------------
PhotonLibPy is a minimal, pure-python implementation of PhotonLib.

Online Install - Java/C++
-------------------------
Click on the WPI icon on the top right of your VS Code window or hit Ctrl+Shift+P (Cmd+Shift+P on macOS) to bring up the command palette. Type "Manage Vendor Libraries" and select the "WPILib: Manage Vendor Libraries" option. Then, select the "Install new library (online)" option.

.. image:: images/adding-offline-library.png
@@ -17,6 +19,19 @@ Paste the following URL into the box that pops up:

.. note:: It is recommended to Build Robot Code at least once when connected to the Internet before heading to an area where Internet connectivity is limited (for example, a competition). This ensures that the relevant files are downloaded to your filesystem.

Offline Install
---------------
Offline Install - Java/C++
--------------------------
This installation option is currently a work in progress. For now, we recommend using the online installation method.

Contributor:
Work in progress for 4 years and counting 😄

Install - Python
----------------
Add photonlibpy to ``pyproject.toml``.

.. code-block:: toml

# Other pip packages to install
requires = [
"photonlibpy",
]

See `The WPILib/RobotPy docs <https://docs.wpilib.org/en/stable/docs/software/python/pyproject_toml.html>`_ for more information on using ``pyproject.toml``.
57 changes: 56 additions & 1 deletion source/docs/programming/photonlib/getting-target-data.rst
@@ -22,6 +22,12 @@ The ``PhotonCamera`` class has two constructors: one that takes a ``NetworkTable
:language: c++
:lines: 42-43

.. code-block:: python

# Change this to match the name of your camera
self.camera = PhotonCamera("photonvision")


.. warning:: Teams must have unique names for all of their cameras regardless of which coprocessor they are attached to.
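
Duplicate names are easy to introduce when wiring up several cameras. As a minimal sketch (this helper is not part of PhotonLib, and the camera names are made up), duplicates can be caught at startup before any ``PhotonCamera`` is constructed:

```python
def assert_unique_camera_names(names):
    """Raise if any camera name is used more than once."""
    duplicates = sorted({n for n in names if names.count(n) > 1})
    if duplicates:
        raise ValueError(f"Duplicate camera names: {duplicates}")

# Hypothetical camera names; replace with your own.
assert_unique_camera_names(["frontCam", "rearCam"])
```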

Getting the Pipeline Result
@@ -44,6 +50,13 @@ Use the ``getLatestResult()``/``GetLatestResult()`` (Java and C++ respectively)
:language: c++
:lines: 35-36

.. code-block:: python

# Query the latest result from PhotonVision
result = self.camera.getLatestResult()



.. note:: Unlike other vision software solutions, using the latest result guarantees that all information is from the same timestamp. This is achievable because the PhotonVision backend sends a byte-packed string of data which is then deserialized by PhotonLib to get target data. For more information, check out the `PhotonLib source code <https://github.com/PhotonVision/photonvision/tree/master/photon-lib>`_.


@@ -63,7 +76,13 @@ Each pipeline result has a ``hasTargets()``/``HasTargets()`` (Java and C++ respe
// Check if the latest result has any targets.
bool hasTargets = result.HasTargets();

.. warning:: You must *always* check if the result has a target via ``hasTargets()``/``HasTargets()`` before getting targets or else you may get a null pointer exception. Further, you must use the same result in every subsequent call in that loop.
.. code-block:: python

# Check if the latest result has any targets.
hasTargets = result.hasTargets()

.. warning:: In Java/C++, you must *always* check whether the result has a target via ``hasTargets()``/``HasTargets()`` before getting targets, or else you may get a null pointer exception. Further, you must use the same result in every subsequent call in that loop.
Contributor:
I wonder why we never just ended up making this an optional value. Not relevant to the review.

Relevant to the review: does this not need to be checked in Python?

Contributor (author):
Well, Python just returns an empty array, so the user still has to deal with it, but the behavior is better than a null pointer.

I'm not sure ``Optional`` was in wide use in FRC at the time the API shape was formed, but it's probably worthwhile as a summer project.



Getting a List of Targets
-------------------------
@@ -86,6 +105,10 @@ You can get a list of currently tracked targets using the ``getTargets()``/``GetTargets()`
// Get a list of currently tracked targets.
wpi::ArrayRef<photonlib::PhotonTrackedTarget> targets = result.GetTargets();

.. code-block:: python

# Get a list of currently tracked targets.
targets = result.getTargets()

Getting the Best Target
-----------------------
You can get the :ref:`best target <docs/reflectiveAndShape/contour-filtering:Contour Grouping and Sorting>` using ``getBestTarget()``/``GetBestTarget()`` (Java and C++ respectively) method from the pipeline result.
@@ -101,6 +124,12 @@
// Get the current best target.
photonlib::PhotonTrackedTarget target = result.GetBestTarget();


.. code-block:: python

# TODO - Not currently supported

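Until ``getBestTarget()`` lands in photonlibpy, one workaround is to pick a target yourself from ``getTargets()``. The largest-area criterion below is an assumption made for illustration; PhotonVision's own "best" ranking follows the pipeline's configured contour sorting.

```python
def best_by_area(targets):
    """Pick the target with the largest area, or None if the list is empty."""
    if not targets:
        return None
    return max(targets, key=lambda t: t.getArea())
```

Usage would look like ``target = best_by_area(result.getTargets())``.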

Getting Data From A Target
--------------------------
* double ``getYaw()``/``GetYaw()``: The yaw of the target in degrees (positive right).
@@ -132,6 +161,16 @@ Getting Data From A Target
frc::Transform2d pose = target.GetCameraToTarget();
wpi::SmallVector<std::pair<double, double>, 4> corners = target.GetCorners();

.. code-block:: python

# Get information from target.
yaw = target.getYaw()
pitch = target.getPitch()
area = target.getArea()
skew = target.getSkew()
pose = target.getCameraToTarget()
corners = target.getDetectedCorners()

Getting AprilTag Data From A Target
-----------------------------------
.. note:: All of the data above (**except skew**) is available when using AprilTags.
@@ -158,6 +197,14 @@
frc::Transform3d bestCameraToTarget = target.GetBestCameraToTarget();
frc::Transform3d alternateCameraToTarget = target.GetAlternateCameraToTarget();

.. code-block:: python

# Get information from target.
targetID = target.getFiducialId()
poseAmbiguity = target.getPoseAmbiguity()
bestCameraToTarget = target.getBestCameraToTarget()
alternateCameraToTarget = target.getAlternateCameraToTarget()

Saving Pictures to File
-----------------------
A ``PhotonCamera`` can save still images from the input or output video streams to file. This is useful for debugging what a camera is seeing while on the field and confirming targets are being identified properly.
@@ -182,4 +229,12 @@ Images are stored within the PhotonVision configuration directory. Running the "
// Capture post-process camera stream image
camera.TakeOutputSnapshot();

.. code-block:: python

# Capture pre-process camera stream image
camera.takeInputSnapshot()

# Capture post-process camera stream image
camera.takeOutputSnapshot()

.. note:: Saving images to file takes a bit of time and uses up disk space, so doing it frequently is not recommended. In general, the camera will save an image every 500ms. Calling these methods faster will not result in additional images. Consider tying image captures to a button press on the driver controller, or an appropriate point in an autonomous routine.
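
The 500 ms behavior above can be mirrored on the robot side so that holding a button does not spam snapshot calls. A minimal sketch (the limiter class and the 0.5 s period are assumptions, not part of PhotonLib):

```python
import time


class SnapshotLimiter:
    """Allow an action at most once per min_period seconds."""

    def __init__(self, min_period=0.5, clock=time.monotonic):
        self._min_period = min_period
        self._clock = clock
        self._last = None

    def ready(self):
        # Returns True (and records the time) only if enough time has passed.
        now = self._clock()
        if self._last is None or now - self._last >= self._min_period:
            self._last = now
            return True
        return False


# Hypothetical usage inside a periodic robot loop:
# if button_pressed and limiter.ready():
#     camera.takeInputSnapshot()
```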