Add enableUndistortion flag to ImgFrameCapabilities #1190

Open

wants to merge 91 commits into v3_develop from rvc2_img_transformations_v2

Commits (91)
06a8c50
Fix ImgTransformations docs
asahtik Nov 12, 2024
ccac782
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Nov 19, 2024
85ea99d
Merge branch 'v3_image_manip_v2_size_fix' of github.com:luxonis/depth…
asahtik Nov 20, 2024
dbca122
ImgTransformations fixes & improvements
asahtik Nov 20, 2024
ff149e6
Bump rvc2 fw
asahtik Nov 20, 2024
3945716
Added stereodepth example
asahtik Nov 21, 2024
e994d49
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Nov 21, 2024
29e523e
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Nov 25, 2024
6e76aed
Bump rvc2 firmware
asahtik Nov 27, 2024
6b05464
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 3, 2024
7e70e81
Do not align depth in SpatialDetectionNetwork
asahtik Dec 5, 2024
74fc947
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 6, 2024
f66b464
Make aligned images sort of work
asahtik Dec 6, 2024
d59e370
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 9, 2024
1a322b5
Bump rvc2 fw
asahtik Dec 9, 2024
df389c3
ImgTransformations ImageAlign improvements
asahtik Dec 9, 2024
2718bbb
Added some setters to ImgTransformations
asahtik Dec 9, 2024
9725f6e
Bump rvc4 fw
asahtik Dec 9, 2024
bd610ec
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 10, 2024
276199a
Add examples
asahtik Dec 10, 2024
aab044f
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 10, 2024
4b8acf9
Bump rvc2 fw
asahtik Dec 10, 2024
7f3ae78
Add enableUndistortion flag to ImgFrameCapabilities
SzabolcsGergely Dec 10, 2024
f19c0a0
RVC2: Update FW
SzabolcsGergely Dec 10, 2024
daa6607
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 11, 2024
54efee8
Bump rvc2 fw
asahtik Dec 11, 2024
6ed3769
Bump rvc4 fw
asahtik Dec 11, 2024
de50194
Re-add depth align to spatial detection network
asahtik Dec 11, 2024
b4f0772
Clamp remapped rect in SpatialDetectionNetwork
asahtik Dec 11, 2024
9da98f4
RVC2: Update FW, change letterboxing behavior
SzabolcsGergely Dec 11, 2024
48b2db5
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 12, 2024
3fa2ba2
ImgTransformations fixes for aligned depth
asahtik Dec 12, 2024
bdf2393
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 13, 2024
884cc06
Bump rvc2 fw with merge [no ci]
asahtik Dec 13, 2024
23b3011
Bump rvc4 fw with merge
asahtik Dec 13, 2024
8d3db28
FW: fix video encoder crash
asahtik Dec 13, 2024
dfbccf6
FW: fix SpatialDetectionNetwork roi error
asahtik Dec 13, 2024
47604c5
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 14, 2024
8cec2b8
FW: Fix transformationdata in aligned stereo depth images
asahtik Dec 16, 2024
8f25dfe
FW: Add sanity check when copying transformation data
asahtik Dec 17, 2024
3c8373c
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 17, 2024
ed0adc1
Add missing bindings
Dec 17, 2024
4ca93ad
Update the depth_align.py example and bump RVC2 FW with fixed distort…
Dec 17, 2024
5d5099c
Merge branch 'rvc2_img_transformations_v2' of github.com:luxonis/dept…
asahtik Dec 18, 2024
607f91f
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 18, 2024
461c9e8
Fix ImageManipV2 crash
asahtik Dec 18, 2024
824bbae
Bump rvc2 fw
asahtik Dec 18, 2024
741a6a8
Bump rvc4 fw
asahtik Dec 18, 2024
848bd85
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 24, 2024
80793f6
FW: hopefully fix videnc crash
asahtik Dec 24, 2024
26ba7b9
FW: new attempt at fixing videnc crash on RVC2
asahtik Dec 24, 2024
ce2ad64
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 6, 2025
9ec8927
Fix encoded_frame_tests [no ci]
asahtik Jan 6, 2025
223ef97
Bump rvc4 fw
asahtik Jan 6, 2025
06b74c2
FW: Add missing header
asahtik Jan 6, 2025
3a6a3b6
FW: Fix imagemanipv2 issue on RVC2
asahtik Jan 6, 2025
025e925
Fix imagemanipv2 issue
asahtik Jan 6, 2025
04c1f65
Fix build issue [no ci]
asahtik Jan 6, 2025
17125f9
Bump rvc4 fw
asahtik Jan 6, 2025
49e401b
Add test
asahtik Jan 7, 2025
46f478f
Colorize frame more generically
Jan 7, 2025
685ed68
Merge remote-tracking branch 'origin/v3_develop' into HEAD
SzabolcsGergely Jan 7, 2025
7278810
Update FW: Merge with latest v3_develop
SzabolcsGergely Jan 7, 2025
cc49040
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 8, 2025
9c8a87d
Bump rvc2 fw
asahtik Jan 8, 2025
6334678
Update RVC4 FW
SzabolcsGergely Jan 8, 2025
0ebfe9a
Add example for transformationdata [no ci]
asahtik Jan 9, 2025
fe7061a
FW: PR fixes
asahtik Jan 9, 2025
a149948
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 9, 2025
bb0d85f
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 9, 2025
289b513
Bump rvc4 fw
asahtik Jan 9, 2025
b55d161
Merge remote-tracking branch 'origin/rvc2_img_transformations_v2' int…
SzabolcsGergely Jan 9, 2025
19c0394
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 10, 2025
8db82bb
FW: handle missing calib when setting initial transformation
asahtik Jan 10, 2025
5cc19b1
Fix example
asahtik Jan 10, 2025
6fc514b
FW: fix crash when using Cono/ColorCamera
asahtik Jan 10, 2025
074361b
ImageManipV2: Add undistort OP
SzabolcsGergely Jan 12, 2025
4bc07f9
RVC2: Update FW
SzabolcsGergely Jan 12, 2025
39bcc50
Move enable undistortion to std::optional
SzabolcsGergely Jan 12, 2025
5e46fdd
Update RVC2 FW
SzabolcsGergely Jan 12, 2025
213b2b4
Update requestOutput bindings
SzabolcsGergely Jan 12, 2025
06b368a
Update RVC4
SzabolcsGergely Jan 12, 2025
d935383
RVC2: Add Camera undistortion example
SzabolcsGergely Jan 12, 2025
926a576
Merge remote-tracking branch 'origin/rvc2_img_transformations_v2' int…
SzabolcsGergely Jan 13, 2025
5ab530e
Fix example crash on rvc4, bump rvc4 fw
asahtik Jan 13, 2025
5c4f8fb
Merge remote-tracking branch 'origin/v3_develop' into HEAD
SzabolcsGergely Jan 15, 2025
1065d3a
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 15, 2025
877de5e
Remove Undistort OP; add undistort flag
SzabolcsGergely Jan 15, 2025
e88d70f
Merge remote-tracking branch 'origin/rvc2_img_transformations_v2' int…
SzabolcsGergely Jan 15, 2025
175bbc8
Remove ImageManip Undistort constructor
SzabolcsGergely Jan 15, 2025
8d60317
Update FW
SzabolcsGergely Jan 15, 2025
@@ -27,6 +27,7 @@ void ImgFrameCapabilityBindings::bind(pybind11::module& m, void* pCallstack) {
.def_readwrite("fps", &ImgFrameCapability::fps)
.def_readwrite("type", &ImgFrameCapability::type)
.def_readwrite("resizeMode", &ImgFrameCapability::resizeMode)
.def_readwrite("enableUndistortion", &ImgFrameCapability::enableUndistortion)

;
}
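
For reference, a minimal sketch of what the new binding exposes from Python (assuming dai.ImgFrameCapability is default-constructible, as the read-write fields above suggest):

import depthai as dai

cap = dai.ImgFrameCapability()
cap.resizeMode = dai.ImgResizeMode.CROP  # assumed enum, mirroring requestOutput's resizeMode argument
cap.enableUndistortion = True            # the new flag bound above; leaving it unset defers to the node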
12 changes: 12 additions & 0 deletions bindings/python/src/pipeline/datatype/ImgFrameBindings.cpp
@@ -144,9 +144,20 @@ void bind_imgframe(pybind11::module& m, void* pCallstack) {
.def("getSourceIntrinsicMatrixInv", &ImgTransformation::getSourceIntrinsicMatrixInv)
.def("getIntrinsicMatrix", &ImgTransformation::getIntrinsicMatrix)
.def("getIntrinsicMatrixInv", &ImgTransformation::getIntrinsicMatrixInv)
.def("getDistortionModel", &ImgTransformation::getDistortionModel, DOC(dai, ImgTransformation, getDistortionModel))
.def("getDistortionCoefficients", &ImgTransformation::getDistortionCoefficients, DOC(dai, ImgTransformation, getDistortionCoefficients))
.def("getSrcCrops", &ImgTransformation::getSrcCrops, DOC(dai, ImgTransformation, getSrcCrops))
.def("getSrcMaskPt", &ImgTransformation::getSrcMaskPt, py::arg("x"), py::arg("y"), DOC(dai, ImgTransformation, getSrcMaskPt))
.def("getDstMaskPt", &ImgTransformation::getDstMaskPt, py::arg("x"), py::arg("y"), DOC(dai, ImgTransformation, getDstMaskPt))
.def("getDFov", &ImgTransformation::getDFov, py::arg("source") = false)
.def("getHFov", &ImgTransformation::getHFov, py::arg("source") = false)
.def("getVFov", &ImgTransformation::getVFov, py::arg("source") = false)
.def("setIntrinsicMatrix", &ImgTransformation::setIntrinsicMatrix, py::arg("intrinsicMatrix"), DOC(dai, ImgTransformation, setIntrinsicMatrix))
.def("setDistortionModel", &ImgTransformation::setDistortionModel, py::arg("model"), DOC(dai, ImgTransformation, setDistortionModel))
.def("setDistortionCoefficients",
&ImgTransformation::setDistortionCoefficients,
py::arg("coefficients"),
DOC(dai, ImgTransformation, setDistortionCoefficients))
.def("addTransformation", &ImgTransformation::addTransformation, py::arg("matrix"), DOC(dai, ImgTransformation, addTransformation))
.def("addCrop", &ImgTransformation::addCrop, py::arg("x"), py::arg("y"), py::arg("width"), py::arg("height"), DOC(dai, ImgTransformation, addCrop))
.def("addPadding",
@@ -201,6 +212,7 @@ void bind_imgframe(pybind11::module& m, void* pCallstack) {
.def("getSourceWidth", &ImgFrame::getSourceWidth, DOC(dai, ImgFrame, getSourceWidth))
.def("getSourceHeight", &ImgFrame::getSourceHeight, DOC(dai, ImgFrame, getSourceHeight))
.def("getTransformation", [](ImgFrame& msg) {return msg.transformation;})
.def("validateTransformations", &ImgFrame::validateTransformations, DOC(dai, ImgFrame, validateTransformations))

#ifdef DEPTHAI_HAVE_OPENCV_SUPPORT
// The cast function itself does a copy, so we can avoid two copies by always not copying
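
For orientation, a hedged host-side sketch of what the newly bound accessors enable (call names follow the bindings above; the frame source and exact return shapes are assumptions):

import cv2
import numpy as np

# 'frame' is a dai.ImgFrame taken from any output queue (assumed available)
t = frame.getTransformation()
K = np.array(t.getIntrinsicMatrix())            # 3x3 intrinsics of this output
dist = np.array(t.getDistortionCoefficients())  # coefficients for t.getDistortionModel()
undistorted = cv2.undistort(frame.getCvFrame(), K, dist)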
3 changes: 2 additions & 1 deletion bindings/python/src/pipeline/node/CameraBindings.cpp
@@ -27,11 +27,12 @@ void bind_camera(pybind11::module& m, void* pCallstack) {
// .def("setCamera", &Camera::setCamera, "name"_a, DOC(dai, node, Camera, setCamera))
// .def("getCamera", &Camera::getCamera, DOC(dai, node, Camera, getCamera))
.def("requestOutput",
- py::overload_cast<std::pair<uint32_t, uint32_t>, std::optional<ImgFrame::Type>, ImgResizeMode, float>(&Camera::requestOutput),
+ py::overload_cast<std::pair<uint32_t, uint32_t>, std::optional<ImgFrame::Type>, ImgResizeMode, float, std::optional<bool>>(&Camera::requestOutput),
"size"_a,
"type"_a = std::nullopt,
"resizeMode"_a = dai::ImgResizeMode::CROP,
"fps"_a = 30,
"enableUndistortion"_a = std::nullopt,
py::return_value_policy::reference_internal,
DOC(dai, node, Camera, requestOutput))
.def("requestOutput",
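A hedged sketch of calling the extended overload from Python; the tri-state is the point: enableUndistortion=None maps to std::nullopt and leaves the decision to the node, while True/False forces it (defaults as bound above):

import depthai as dai

with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
    plain = cam.requestOutput((640, 400))  # enableUndistortion defaults to None (std::nullopt)
    undistorted = cam.requestOutput((640, 400),
                                    resizeMode=dai.ImgResizeMode.STRETCH,
                                    enableUndistortion=True)  # explicitly request undistortion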
2 changes: 1 addition & 1 deletion cmake/Depthai/DepthaiDeviceRVC4Config.cmake
@@ -4,4 +4,4 @@ set(DEPTHAI_DEVICE_RVC4_MATURITY "snapshot")

# "version if applicable"
# set(DEPTHAI_DEVICE_RVC4_VERSION "0.0.1+93f7b75a885aa32f44c5e9f53b74470c49d2b1af")
set(DEPTHAI_DEVICE_RVC4_VERSION "0.0.1+670797e3b8cbc185c7d457e24382495200486573")
set(DEPTHAI_DEVICE_RVC4_VERSION "0.0.1+84c5c0523a6da38977bed5703d52cf4f1474ff8b")
2 changes: 1 addition & 1 deletion cmake/Depthai/DepthaiDeviceSideConfig.cmake
@@ -2,7 +2,7 @@
set(DEPTHAI_DEVICE_SIDE_MATURITY "snapshot")

# "full commit hash of device side binary"
- set(DEPTHAI_DEVICE_SIDE_COMMIT "fa759809823266407a39cd7cc77d98175234a7f6")
+ set(DEPTHAI_DEVICE_SIDE_COMMIT "108b7c0bd0a963401dedf929417eeee5acd4bdb2")

# "version if applicable"
set(DEPTHAI_DEVICE_SIDE_VERSION "")
3 changes: 3 additions & 0 deletions examples/cpp/CMakeLists.txt
@@ -273,6 +273,9 @@ dai_add_example(record_imu RVC2/Record/record_imu.cpp OFF OFF)
dai_add_example(replay_video_meta RVC2/Replay/replay_video_meta.cpp OFF OFF)
dai_add_example(replay_imu RVC2/Replay/replay_imu.cpp OFF OFF)

# StereoDepth
dai_add_example(depth_preview StereoDepth/depth_preview.cpp OFF OFF)

# Thermal
dai_add_example(thermal RVC2/Thermal/thermal.cpp OFF OFF)

8 changes: 3 additions & 5 deletions examples/cpp/ImageManip/image_manip_v2_mod.cpp
@@ -14,13 +14,10 @@ int main(int argc, char** argv) {
}
dai::Pipeline pipeline(device);

- auto camRgb = pipeline.create<dai::node::ColorCamera>();
+ auto camRgb = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_A);
auto display = pipeline.create<dai::node::Display>();
auto manip = pipeline.create<dai::node::ImageManipV2>();

- camRgb->setBoardSocket(dai::CameraBoardSocket::CAM_A);
- camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);

manip->setMaxOutputFrameSize(4000000);
manip->initialConfig.setOutputSize(1280, 720, dai::ImageManipConfigV2::ResizeMode::LETTERBOX);
manip->initialConfig.setBackgroundColor(100, 100, 100);
@@ -29,7 +26,8 @@
manip->initialConfig.addFlipVertical();
manip->initialConfig.setFrameType(dai::ImgFrame::Type::RGB888p);

- camRgb->video.link(manip->inputImage);
+ auto* rgbOut = camRgb->requestOutput({1920, 1080});
+ rgbOut->link(manip->inputImage);
manip->out.link(display->input);

pipeline.start();
65 changes: 65 additions & 0 deletions examples/cpp/StereoDepth/depth_preview.cpp
@@ -0,0 +1,65 @@
#include <iostream>

// Includes common necessary includes for development using depthai library
#include "depthai/depthai.hpp"
#include "depthai/pipeline/datatype/StereoDepthConfig.hpp"
#include "depthai/pipeline/node/StereoDepth.hpp"
#include "depthai/properties/StereoDepthProperties.hpp"

// Closer-in minimum depth, disparity range is doubled (from 95 to 190):
static std::atomic<bool> extended_disparity{false};
// Better accuracy for longer distance, fractional disparity 32-levels:
static std::atomic<bool> subpixel{false};
// Better handling for occlusions:
static std::atomic<bool> lr_check{true};

int main() {
// Create pipeline
dai::Pipeline pipeline;

// Define sources and outputs
auto monoLeft = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_B);
auto monoRight = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_C);
auto depth = pipeline.create<dai::node::StereoDepth>();

// Properties
auto* lout = monoLeft->requestOutput({640, 400});
auto* rout = monoRight->requestOutput({640, 400});

// Create a node that will produce the depth map (using disparity output as it's easier to visualize depth this way)
depth->build(*lout, *rout, dai::node::StereoDepth::PresetMode::HIGH_DENSITY);
// Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
depth->initialConfig.setMedianFilter(dai::StereoDepthConfig::MedianFilter::KERNEL_7x7);
depth->setLeftRightCheck(lr_check);
depth->setExtendedDisparity(extended_disparity);
depth->setSubpixel(subpixel);

// Output queue will be used to get the disparity frames from the outputs defined above
auto q = depth->disparity.createOutputQueue();
auto qleft = lout->createOutputQueue();

pipeline.start();

while(true) {
auto inDepth = q->get<dai::ImgFrame>();
auto inLeft = qleft->get<dai::ImgFrame>();
auto frame = inDepth->getFrame();
// Normalization for better visualization
frame.convertTo(frame, CV_8UC1, 255 / depth->initialConfig.getMaxDisparity());

std::cout << "Left type: " << inLeft->fb.str() << std::endl;

cv::imshow("disparity", frame);

// Available color maps: https://docs.opencv.org/3.4/d3/d50/group__imgproc__colormap.html
cv::applyColorMap(frame, frame, cv::COLORMAP_JET);
cv::imshow("disparity_color", frame);

int key = cv::waitKey(1);
if(key == 'q' || key == 'Q') {
break;
}
}
pipeline.stop();
return 0;
}
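
The convertTo call above maps raw disparity onto 0-255 for display: with the default 95 disparity levels the scale factor is 255 / 95 ≈ 2.68, and extended disparity doubles the range to 190 levels, halving it. A hedged Python equivalent of the same normalization (the stereo node and frame source are assumed):

import cv2
import numpy as np

maxDisparity = stereo.initialConfig.getMaxDisparity()    # 95 by default, 190 with extended disparity
vis = (frame * (255.0 / maxDisparity)).astype(np.uint8)  # normalize to 8-bit for display
cv2.imshow("disparity_color", cv2.applyColorMap(vis, cv2.COLORMAP_JET))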
5 changes: 5 additions & 0 deletions examples/python/CMakeLists.txt
@@ -147,6 +147,8 @@ dai_set_example_test_labels(april_tags_replay ondevice rvc2_all rvc4 ci)
## Detection network
add_python_example(detection_network DetectionNetwork/detection_network.py)
dai_set_example_test_labels(detection_network ondevice rvc2_all rvc4 ci)
add_python_example(detection_network_remap DetectionNetwork/detection_network_remap.py)
dai_set_example_test_labels(detection_network_remap ondevice rvc2_all rvc4 ci)

## Host nodes
add_python_example(display HostNodes/display.py)
@@ -162,6 +164,9 @@ dai_set_example_test_labels(image_manip_mod ondevice rvc2_all rvc4 ci)
add_python_example(image_manip_resize ImageManip/image_manip_v2_resize.py)
dai_set_example_test_labels(image_manip_resize ondevice rvc2_all rvc4 ci)

add_python_example(image_manip_remap ImageManip/image_manip_v2_remap.py)
dai_set_example_test_labels(image_manip_remap ondevice rvc2_all rvc4 ci)

## Misc
add_python_example(reconnect_callback Misc/AutoReconnect/reconnect_callback.py)
dai_set_example_test_labels(reconnect_callback ondevice rvc2_all rvc4 ci)
@@ -4,6 +4,33 @@
import depthai as dai
import numpy as np

def colorizeDepth(frameDepth):
invalidMask = frameDepth == 0
# Log the depth, minDepth and maxDepth
try:
minDepth = np.percentile(frameDepth[frameDepth != 0], 3)
maxDepth = np.percentile(frameDepth[frameDepth != 0], 95)
logDepth = np.log(frameDepth, where=frameDepth != 0)
logMinDepth = np.log(minDepth)
logMaxDepth = np.log(maxDepth)
np.nan_to_num(logDepth, copy=False, nan=logMinDepth)
# Clip the values to be in the 0-255 range
logDepth = np.clip(logDepth, logMinDepth, logMaxDepth)

# Interpolate only valid logDepth values, setting the rest based on the mask
depthFrameColor = np.interp(logDepth, (logMinDepth, logMaxDepth), (0, 255))
depthFrameColor = np.nan_to_num(depthFrameColor)
depthFrameColor = depthFrameColor.astype(np.uint8)
depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_JET)
# Set invalid depth pixels to black
depthFrameColor[invalidMask] = 0
except IndexError:
# Frame is likely empty
depthFrameColor = np.zeros((frameDepth.shape[0], frameDepth.shape[1], 3), dtype=np.uint8)
except Exception as e:
raise e
return depthFrameColor

# Create pipeline
with dai.Pipeline() as pipeline:
cameraNode = pipeline.create(dai.node.Camera).build()
@@ -35,8 +62,7 @@ def displayFrame(name: str, frame: dai.ImgFrame, imgDetections: dai.ImgDetection
assert imgDetections.getTransformation() is not None
cvFrame = frame.getFrame() if frame.getType() == dai.ImgFrame.Type.RAW16 else frame.getCvFrame()
if(frame.getType() == dai.ImgFrame.Type.RAW16):
- cvFrame = (cvFrame * (255 / stereo.initialConfig.getMaxDisparity())).astype(np.uint8)
- cvFrame = cv2.applyColorMap(cvFrame, cv2.COLORMAP_JET)
+ cvFrame = colorizeDepth(cvFrame)
for detection in imgDetections.detections:
# Get the shape of the frame from which the detections originated for denormalization
normShape = imgDetections.getTransformation().getSize()
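A hedged sketch of the denormalization step the loop above performs, assuming detections carry normalized [0, 1] coordinates and getSize() returns (width, height):

import cv2

w, h = imgDetections.getTransformation().getSize()  # size of the frame the detections originated from
for detection in imgDetections.detections:
    topLeft = (int(detection.xmin * w), int(detection.ymin * h))
    bottomRight = (int(detection.xmax * w), int(detection.ymax * h))
    cv2.rectangle(cvFrame, topLeft, bottomRight, (0, 255, 0), 2)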
19 changes: 3 additions & 16 deletions examples/python/ImageAlign/depth_align.py
@@ -25,15 +25,7 @@ def getFps(self):
return 0
return (len(self.frameTimes) - 1) / (self.frameTimes[-1] - self.frameTimes[0])

- device = dai.Device()
-
- calibrationHandler = device.readCalibration()
- rgbDistortion = calibrationHandler.getDistortionCoefficients(RGB_SOCKET)
- distortionModel = calibrationHandler.getDistortionModel(RGB_SOCKET)
- if distortionModel != dai.CameraModel.Perspective:
-     raise RuntimeError("Unsupported distortion model for RGB camera. This example supports only Perspective model.")
-
- pipeline = dai.Pipeline(device)
+ pipeline = dai.Pipeline()

platform = pipeline.getDefaultDevice().getPlatform()

Expand All @@ -47,7 +39,6 @@ def getFps(self):
align = pipeline.create(dai.node.ImageAlign)

stereo.setExtendedDisparity(True)

sync.setSyncThreshold(timedelta(seconds=1/(2*FPS)))

rgbOut = camRgb.requestOutput(size = (1280, 960), fps = FPS)
@@ -142,14 +133,10 @@ def updateBlendWeights(percentRgb):
# Blend when both received
if frameDepth is not None:
cvFrame = frameRgb.getCvFrame()

- # Undistort the rgb frame
- rgbIntrinsics = calibrationHandler.getCameraIntrinsics(RGB_SOCKET, int(cvFrame.shape[1]), int(cvFrame.shape[0]))
-
cvFrameUndistorted = cv2.undistort(
cvFrame,
- np.array(rgbIntrinsics),
- np.array(rgbDistortion),
+ np.array(frameRgb.getTransformation().getIntrinsicMatrix()),
+ np.array(frameRgb.getTransformation().getDistortionCoefficients()),
)
# Colorize the aligned depth
alignedDepthColorized = colorizeDepth(frameDepth.getFrame())
70 changes: 70 additions & 0 deletions examples/python/ImageManip/image_manip_v2_remap.py
@@ -0,0 +1,70 @@
import depthai as dai
import cv2
import numpy as np

def draw_rotated_rectangle(frame, center, size, angle, color, thickness=2):
"""
Draws a rotated rectangle on the given frame.

Args:
frame (numpy.ndarray): The image/frame to draw on.
center (tuple): The (x, y) coordinates of the rectangle's center.
size (tuple): The (width, height) of the rectangle.
angle (float): The rotation angle of the rectangle in degrees (counter-clockwise).
color (tuple): The color of the rectangle in BGR format (e.g., (0, 255, 0) for green).
thickness (int): The thickness of the rectangle edges. Default is 2.
"""
# Create a rotated rectangle
rect = ((center[0], center[1]), (size[0], size[1]), angle)

# Get the four vertices of the rotated rectangle
box = cv2.boxPoints(rect)
box = np.intp(box) # Convert to integer coordinates

# Draw the rectangle on the frame
cv2.polylines(frame, [box], isClosed=True, color=color, thickness=thickness)

with dai.Pipeline() as pipeline:
cam = pipeline.create(dai.node.Camera).build()
camOut = cam.requestOutput((640, 400), dai.ImgFrame.Type.BGR888i, fps = 30.0)
manip1 = pipeline.create(dai.node.ImageManipV2)
manip2 = pipeline.create(dai.node.ImageManipV2)

camOut.link(manip1.inputImage)
manip1.out.link(manip2.inputImage)

manip1.initialConfig.addRotateDeg(90)
manip1.initialConfig.setOutputSize(200, 320)

manip2.initialConfig.addRotateDeg(90)
manip2.initialConfig.setOutputSize(320, 200)
manip2.setRunOnHost(True)

outQcam = camOut.createOutputQueue()
outQ1 = manip1.out.createOutputQueue()
outQ2 = manip2.out.createOutputQueue()

pipeline.start()

while True:
camFrame: dai.ImgFrame = outQcam.get()
manip1Frame: dai.ImgFrame = outQ1.get()
manip2Frame: dai.ImgFrame = outQ2.get()

camCv = camFrame.getCvFrame()
manip1Cv = manip1Frame.getCvFrame()
manip2Cv = manip2Frame.getCvFrame()

rect2 = dai.RotatedRect(dai.Rect(dai.Point2f(100, 100), dai.Point2f(200, 150)), 0)
rect1 = manip2Frame.getTransformation().remapRectTo(manip1Frame.getTransformation(), rect2)
rectcam = manip1Frame.getTransformation().remapRectTo(camFrame.getTransformation(), rect1)

draw_rotated_rectangle(manip2Cv, (rect2.center.x, rect2.center.y), (rect2.size.width, rect2.size.height), rect2.angle, (255, 0, 0))
draw_rotated_rectangle(manip1Cv, (rect1.center.x, rect1.center.y), (rect1.size.width, rect1.size.height), rect1.angle, (255, 0, 0))
draw_rotated_rectangle(camCv, (rectcam.center.x, rectcam.center.y), (rectcam.size.width, rectcam.size.height), rectcam.angle, (255, 0, 0))

cv2.imshow("cam", camCv)
cv2.imshow("manip1", manip1Cv)
cv2.imshow("manip2", manip2Cv)
if cv2.waitKey(1) == ord('q'):
break
30 changes: 30 additions & 0 deletions examples/python/RVC2/Camera/camera_undistortion.py
@@ -0,0 +1,30 @@
#!/usr/bin/env python3

import cv2
import depthai as dai

# Create pipeline
with dai.Pipeline() as pipeline:
# Define source and output
cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
croppedQueue = cam.requestOutput((300,300), resizeMode=dai.ImgResizeMode.CROP, enableUndistortion=True).createOutputQueue()
stretchedQueue = cam.requestOutput((300,300), resizeMode=dai.ImgResizeMode.STRETCH, enableUndistortion=True).createOutputQueue()
letterBoxedQueue = cam.requestOutput((300,300), resizeMode=dai.ImgResizeMode.LETTERBOX, enableUndistortion=True).createOutputQueue()

# Connect to device and start pipeline
pipeline.start()
while pipeline.isRunning():
croppedIn = croppedQueue.get()
assert isinstance(croppedIn, dai.ImgFrame)
cv2.imshow("cropped undistorted", croppedIn.getCvFrame())

stretchedIn = stretchedQueue.get()
assert isinstance(stretchedIn, dai.ImgFrame)
cv2.imshow("stretched undistorted", stretchedIn.getCvFrame())

letterBoxedIn = letterBoxedQueue.get()
assert isinstance(letterBoxedIn, dai.ImgFrame)
cv2.imshow("letterboxed undistorted", letterBoxedIn.getCvFrame())

if cv2.waitKey(1) == ord("q"):
break