RVC2 img transformations #1186

Merged Jan 16, 2025 · 71 commits · Changes from 56 commits

Commits
06a8c50
Fix ImgTransformations docs
asahtik Nov 12, 2024
ccac782
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Nov 19, 2024
85ea99d
Merge branch 'v3_image_manip_v2_size_fix' of github.com:luxonis/depth…
asahtik Nov 20, 2024
dbca122
ImgTransformations fixes & improvements
asahtik Nov 20, 2024
ff149e6
Bump rvc2 fw
asahtik Nov 20, 2024
3945716
Added stereodepth example
asahtik Nov 21, 2024
e994d49
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Nov 21, 2024
29e523e
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Nov 25, 2024
6e76aed
Bump rvc2 firmware
asahtik Nov 27, 2024
6b05464
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 3, 2024
7e70e81
Do not align depth in SpatialDetectionNetwork
asahtik Dec 5, 2024
74fc947
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 6, 2024
f66b464
Make aligned images sort of work
asahtik Dec 6, 2024
d59e370
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 9, 2024
1a322b5
Bump rvc2 fw
asahtik Dec 9, 2024
df389c3
ImgTransformations ImageAlign improvements
asahtik Dec 9, 2024
2718bbb
Added some setters to ImgTransformations
asahtik Dec 9, 2024
9725f6e
Bump rvc4 fw
asahtik Dec 9, 2024
bd610ec
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 10, 2024
276199a
Add examples
asahtik Dec 10, 2024
aab044f
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 10, 2024
4b8acf9
Bump rvc2 fw
asahtik Dec 10, 2024
daa6607
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 11, 2024
54efee8
Bump rvc2 fw
asahtik Dec 11, 2024
6ed3769
Bump rvc4 fw
asahtik Dec 11, 2024
de50194
Re-add depth align to spatial detection network
asahtik Dec 11, 2024
b4f0772
Clamp remapped rect in SpatialDetectionNetwork
asahtik Dec 11, 2024
48b2db5
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 12, 2024
3fa2ba2
ImgTransformations fixes for aligned depth
asahtik Dec 12, 2024
bdf2393
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 13, 2024
884cc06
Bump rvc2 fw with merge [no ci]
asahtik Dec 13, 2024
23b3011
Bump rvc4 fw with merge
asahtik Dec 13, 2024
8d3db28
FW: fix video encoder crash
asahtik Dec 13, 2024
dfbccf6
FW: fix SpatialDetectionNetwork roi error
asahtik Dec 13, 2024
47604c5
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 14, 2024
8cec2b8
FW: Fix transformationdata in aligned stereo depth images
asahtik Dec 16, 2024
8f25dfe
FW: Add sanity check when copying transformation data
asahtik Dec 17, 2024
3c8373c
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 17, 2024
ed0adc1
Add missing bindings
Dec 17, 2024
4ca93ad
Update the depth_align.py example and bump RVC2 FW with fixed distort…
Dec 17, 2024
5d5099c
Merge branch 'rvc2_img_transformations_v2' of github.com:luxonis/dept…
asahtik Dec 18, 2024
607f91f
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 18, 2024
461c9e8
Fix ImageManipV2 crash
asahtik Dec 18, 2024
824bbae
Bump rvc2 fw
asahtik Dec 18, 2024
741a6a8
Bump rvc4 fw
asahtik Dec 18, 2024
848bd85
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Dec 24, 2024
80793f6
FW: hopefully fix videnc crash
asahtik Dec 24, 2024
26ba7b9
FW: new attempt at fixing videnc crash on RVC2
asahtik Dec 24, 2024
ce2ad64
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 6, 2025
9ec8927
Fix encoded_frame_tests [no ci]
asahtik Jan 6, 2025
223ef97
Bump rvc4 fw
asahtik Jan 6, 2025
06b74c2
FW: Add missing header
asahtik Jan 6, 2025
3a6a3b6
FW: Fix imagemanipv2 issue on RVC2
asahtik Jan 6, 2025
025e925
Fix imagemanipv2 issue
asahtik Jan 6, 2025
04c1f65
Fix build issue [no ci]
asahtik Jan 6, 2025
17125f9
Bump rvc4 fw
asahtik Jan 6, 2025
49e401b
Add test
asahtik Jan 7, 2025
46f478f
Colorize frame more generically
Jan 7, 2025
cc49040
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 8, 2025
9c8a87d
Bump rvc2 fw
asahtik Jan 8, 2025
0ebfe9a
Add example for transformationdata [no ci]
asahtik Jan 9, 2025
fe7061a
FW: PR fixes
asahtik Jan 9, 2025
a149948
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 9, 2025
bb0d85f
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 9, 2025
289b513
Bump rvc4 fw
asahtik Jan 9, 2025
19c0394
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 10, 2025
8db82bb
FW: handle missing calib when setting initial transformation
asahtik Jan 10, 2025
5cc19b1
Fix example
asahtik Jan 10, 2025
6fc514b
FW: fix crash when using Cono/ColorCamera
asahtik Jan 10, 2025
5ab530e
Fix example crash on rvc4, bump rvc4 fw
asahtik Jan 13, 2025
1065d3a
Merge branch 'v3_develop' of github.com:luxonis/depthai-core into rvc…
asahtik Jan 15, 2025
12 changes: 12 additions & 0 deletions bindings/python/src/pipeline/datatype/ImgFrameBindings.cpp
@@ -144,9 +144,20 @@ void bind_imgframe(pybind11::module& m, void* pCallstack) {
.def("getSourceIntrinsicMatrixInv", &ImgTransformation::getSourceIntrinsicMatrixInv)
.def("getIntrinsicMatrix", &ImgTransformation::getIntrinsicMatrix)
.def("getIntrinsicMatrixInv", &ImgTransformation::getIntrinsicMatrixInv)
+.def("getDistortionModel", &ImgTransformation::getDistortionModel, DOC(dai, ImgTransformation, getDistortionModel))
+.def("getDistortionCoefficients", &ImgTransformation::getDistortionCoefficients, DOC(dai, ImgTransformation, getDistortionCoefficients))
+.def("getSrcCrops", &ImgTransformation::getSrcCrops, DOC(dai, ImgTransformation, getSrcCrops))
+.def("getSrcMaskPt", &ImgTransformation::getSrcMaskPt, py::arg("x"), py::arg("y"), DOC(dai, ImgTransformation, getSrcMaskPt))
+.def("getDstMaskPt", &ImgTransformation::getDstMaskPt, py::arg("x"), py::arg("y"), DOC(dai, ImgTransformation, getDstMaskPt))
.def("getDFov", &ImgTransformation::getDFov, py::arg("source") = false)
.def("getHFov", &ImgTransformation::getHFov, py::arg("source") = false)
.def("getVFov", &ImgTransformation::getVFov, py::arg("source") = false)
+.def("setIntrinsicMatrix", &ImgTransformation::setIntrinsicMatrix, py::arg("intrinsicMatrix"), DOC(dai, ImgTransformation, setIntrinsicMatrix))
+.def("setDistortionModel", &ImgTransformation::setDistortionModel, py::arg("model"), DOC(dai, ImgTransformation, setDistortionModel))
+.def("setDistortionCoefficients",
+&ImgTransformation::setDistortionCoefficients,
+py::arg("coefficients"),
+DOC(dai, ImgTransformation, setDistortionCoefficients))
.def("addTransformation", &ImgTransformation::addTransformation, py::arg("matrix"), DOC(dai, ImgTransformation, addTransformation))
.def("addCrop", &ImgTransformation::addCrop, py::arg("x"), py::arg("y"), py::arg("width"), py::arg("height"), DOC(dai, ImgTransformation, addCrop))
.def("addPadding",
@@ -201,6 +212,7 @@ void bind_imgframe(pybind11::module& m, void* pCallstack) {
.def("getSourceWidth", &ImgFrame::getSourceWidth, DOC(dai, ImgFrame, getSourceWidth))
.def("getSourceHeight", &ImgFrame::getSourceHeight, DOC(dai, ImgFrame, getSourceHeight))
.def("getTransformation", [](ImgFrame& msg) {return msg.transformation;})
+.def("validateTransformations", &ImgFrame::validateTransformations, DOC(dai, ImgFrame, validateTransformations))

#ifdef DEPTHAI_HAVE_OPENCV_SUPPORT
// The cast function itself does a copy, so we can avoid two copies by always not copying
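The newly bound getHFov/getVFov getters expose field-of-view values derived from the frame's intrinsic matrix. As a rough illustration of the standard pinhole relation (not the library's actual implementation, and the helper name is made up here), the horizontal FOV follows from the focal length in pixels:

```python
import math

def hfov_from_intrinsics(intrinsic, width):
    """Horizontal FOV in degrees implied by a 3x3 pinhole intrinsic matrix."""
    fx = intrinsic[0][0]  # focal length in pixels along x
    return math.degrees(2 * math.atan2(width / 2, fx))

# A 640 px wide image with fx = 320 px spans 2 * atan(1) = 90 degrees.
K = [[320.0, 0.0, 320.0],
     [0.0, 320.0, 200.0],
     [0.0, 0.0, 1.0]]
print(hfov_from_intrinsics(K, 640))  # → 90.0
```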
2 changes: 1 addition & 1 deletion cmake/Depthai/DepthaiDeviceRVC4Config.cmake
@@ -4,4 +4,4 @@ set(DEPTHAI_DEVICE_RVC4_MATURITY "snapshot")

# "version if applicable"
# set(DEPTHAI_DEVICE_RVC4_VERSION "0.0.1+93f7b75a885aa32f44c5e9f53b74470c49d2b1af")
-set(DEPTHAI_DEVICE_RVC4_VERSION "0.0.1+6fc71e8674fe7a520b93f4370cb157805f0bc0f2")
+set(DEPTHAI_DEVICE_RVC4_VERSION "0.0.1+38fbf795d3d5335f409db0247c9cfbada0e66c49")
2 changes: 1 addition & 1 deletion cmake/Depthai/DepthaiDeviceSideConfig.cmake
@@ -2,7 +2,7 @@
set(DEPTHAI_DEVICE_SIDE_MATURITY "snapshot")

# "full commit hash of device side binary"
-set(DEPTHAI_DEVICE_SIDE_COMMIT "052ee5648d9d5a49e6367c4a81d14a9b9ae8bfcf")
+set(DEPTHAI_DEVICE_SIDE_COMMIT "435d0b10d52cb3e78b30f64b9a6e2e28da28cc76")

# "version if applicable"
set(DEPTHAI_DEVICE_SIDE_VERSION "")
3 changes: 3 additions & 0 deletions examples/cpp/CMakeLists.txt
@@ -273,6 +273,9 @@ dai_add_example(record_imu RVC2/Record/record_imu.cpp OFF OFF)
dai_add_example(replay_video_meta RVC2/Replay/replay_video_meta.cpp OFF OFF)
dai_add_example(replay_imu RVC2/Replay/replay_imu.cpp OFF OFF)

+# StereoDepth
+dai_add_example(depth_preview StereoDepth/depth_preview.cpp OFF OFF)

# Thermal
dai_add_example(thermal RVC2/Thermal/thermal.cpp OFF OFF)

8 changes: 3 additions & 5 deletions examples/cpp/ImageManip/image_manip_v2_mod.cpp
@@ -14,13 +14,10 @@ int main(int argc, char** argv) {
}
dai::Pipeline pipeline(device);

-auto camRgb = pipeline.create<dai::node::ColorCamera>();
+auto camRgb = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_A);
auto display = pipeline.create<dai::node::Display>();
auto manip = pipeline.create<dai::node::ImageManipV2>();

-camRgb->setBoardSocket(dai::CameraBoardSocket::CAM_A);
-camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
-
manip->setMaxOutputFrameSize(4000000);
manip->initialConfig.setOutputSize(1280, 720, dai::ImageManipConfigV2::ResizeMode::LETTERBOX);
manip->initialConfig.setBackgroundColor(100, 100, 100);
@@ -29,7 +26,8 @@
manip->initialConfig.addFlipVertical();
manip->initialConfig.setFrameType(dai::ImgFrame::Type::RGB888p);

-camRgb->video.link(manip->inputImage);
+auto* rgbOut = camRgb->requestOutput({1920, 1080});
+rgbOut->link(manip->inputImage);
manip->out.link(display->input);

pipeline.start();
65 changes: 65 additions & 0 deletions examples/cpp/StereoDepth/depth_preview.cpp
@@ -0,0 +1,65 @@
#include <iostream>

// Includes common necessary includes for development using depthai library
#include "depthai/depthai.hpp"
#include "depthai/pipeline/datatype/StereoDepthConfig.hpp"
#include "depthai/pipeline/node/StereoDepth.hpp"
#include "depthai/properties/StereoDepthProperties.hpp"

// Closer-in minimum depth, disparity range is doubled (from 95 to 190):
static std::atomic<bool> extended_disparity{false};
// Better accuracy for longer distance, fractional disparity 32-levels:
static std::atomic<bool> subpixel{false};
// Better handling for occlusions:
static std::atomic<bool> lr_check{true};

int main() {
// Create pipeline
dai::Pipeline pipeline;

// Define sources and outputs
auto monoLeft = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_B);
auto monoRight = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_C);
auto depth = pipeline.create<dai::node::StereoDepth>();

// Properties
auto* lout = monoLeft->requestOutput({640, 400});
auto* rout = monoRight->requestOutput({640, 400});

// Create a node that will produce the depth map (using disparity output as it's easier to visualize depth this way)
depth->build(*lout, *rout, dai::node::StereoDepth::PresetMode::HIGH_DENSITY);
// Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
depth->initialConfig.setMedianFilter(dai::StereoDepthConfig::MedianFilter::KERNEL_7x7);
depth->setLeftRightCheck(lr_check);
depth->setExtendedDisparity(extended_disparity);
depth->setSubpixel(subpixel);

// Output queue will be used to get the disparity frames from the outputs defined above
auto q = depth->disparity.createOutputQueue();
auto qleft = lout->createOutputQueue();

pipeline.start();

while(true) {
auto inDepth = q->get<dai::ImgFrame>();
auto inLeft = qleft->get<dai::ImgFrame>();
auto frame = inDepth->getFrame();
// Normalization for better visualization
frame.convertTo(frame, CV_8UC1, 255 / depth->initialConfig.getMaxDisparity());

std::cout << "Left type: " << inLeft->fb.str() << std::endl;

cv::imshow("disparity", frame);

// Available color maps: https://docs.opencv.org/3.4/d3/d50/group__imgproc__colormap.html
cv::applyColorMap(frame, frame, cv::COLORMAP_JET);
cv::imshow("disparity_color", frame);

int key = cv::waitKey(1);
if(key == 'q' || key == 'Q') {
break;
}
}
pipeline.stop();
return 0;
}
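The convertTo call above maps raw disparity onto the displayable 0–255 range by scaling with 255 / maxDisparity. A minimal numpy sketch of the same normalization (the function name is illustrative, not part of the depthai API):

```python
import numpy as np

def normalize_disparity(disp, max_disparity):
    """Scale raw disparity to 0-255 for display, like convertTo above."""
    return (disp * (255.0 / max_disparity)).clip(0, 255).astype(np.uint8)

disp = np.array([0, 47, 95], dtype=np.uint16)
print(normalize_disparity(disp, 95))  # → [  0 126 255]
```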
65 changes: 65 additions & 0 deletions examples/python/ImageAlign/aligned_remap.py
@@ -0,0 +1,65 @@
import numpy as np
import depthai as dai
import cv2

def draw_rotated_rectangle(frame, center, size, angle, color, thickness=2):
"""
Draws a rotated rectangle on the given frame.

Args:
frame (numpy.ndarray): The image/frame to draw on.
center (tuple): The (x, y) coordinates of the rectangle's center.
size (tuple): The (width, height) of the rectangle.
angle (float): The rotation angle of the rectangle in degrees (counter-clockwise).
color (tuple): The color of the rectangle in BGR format (e.g., (0, 255, 0) for green).
thickness (int): The thickness of the rectangle edges. Default is 2.
"""
# Create a rotated rectangle
rect = ((center[0], center[1]), (size[0], size[1]), angle)

# Get the four vertices of the rotated rectangle
box = cv2.boxPoints(rect)
box = np.intp(box)  # Convert to integer coordinates (np.int0 was removed in NumPy 2.0)

# Draw the rectangle on the frame
cv2.polylines(frame, [box], isClosed=True, color=color, thickness=thickness)

with dai.Pipeline() as pipeline:
monoLeft = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
monoRight = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
align = pipeline.create(dai.node.ImageAlign)

monoLeftOut = monoLeft.requestOutput((640, 480))
monoRightOut = monoRight.requestOutput((640, 480))

monoLeftOut.link(align.input)
monoRightOut.link(align.inputAlignTo)

rightOut = monoRightOut.createOutputQueue()
alignOut = align.outputAligned.createOutputQueue()

pipeline.start()
while pipeline.isRunning():
rightFrame = rightOut.get()
alignedFrame = alignOut.get()

print(alignedFrame)

assert rightFrame.validateTransformations()
assert alignedFrame.validateTransformations()

right = rightFrame.getCvFrame()
depth = alignedFrame.getCvFrame()

rect = dai.RotatedRect(dai.Point2f(200, 100), dai.Size2f(200, 100), 10)
remappedRect = rightFrame.getTransformation().remapRectTo(alignedFrame.getTransformation(), rect)

draw_rotated_rectangle(right, (rect.center.x, rect.center.y), (rect.size.width, rect.size.height), rect.angle, (255, 0, 0))
draw_rotated_rectangle(depth, (remappedRect.center.x, remappedRect.center.y), (remappedRect.size.width, remappedRect.size.height), remappedRect.angle, (255, 0, 0))

cv2.imshow("right", right)
cv2.imshow("aligned", depth)

if cv2.waitKey(1) == ord('q'):
break
pipeline.stop()
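remapRectTo carries a rectangle from one image's coordinate frame into another's using the transformation metadata attached to each frame. A simplified numpy sketch of the idea, assuming the two views differ only by their intrinsic matrices (the real call also accounts for crops, flips, and alignment; the helper name is illustrative):

```python
import numpy as np

def remap_point(p, K_src, K_dst):
    """Map a pixel from the source view into the destination view via the
    intrinsics: back-project with inv(K_src), re-project with K_dst."""
    src = np.array([p[0], p[1], 1.0])
    dst = K_dst @ np.linalg.inv(K_src) @ src
    return (dst[0] / dst[2], dst[1] / dst[2])

# Halving the focal length and principal point halves pixel coordinates;
# the principal point itself maps to the new principal point.
K_src = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
K_dst = np.array([[400.0, 0, 160], [0, 400.0, 120], [0, 0, 1]])
print(remap_point((320, 240), K_src, K_dst))  # → (160.0, 120.0)
```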
19 changes: 3 additions & 16 deletions examples/python/ImageAlign/depth_align.py
@@ -25,15 +25,7 @@ def getFps(self):
return 0
return (len(self.frameTimes) - 1) / (self.frameTimes[-1] - self.frameTimes[0])

-device = dai.Device()
-
-calibrationHandler = device.readCalibration()
-rgbDistortion = calibrationHandler.getDistortionCoefficients(RGB_SOCKET)
-distortionModel = calibrationHandler.getDistortionModel(RGB_SOCKET)
-if distortionModel != dai.CameraModel.Perspective:
-raise RuntimeError("Unsupported distortion model for RGB camera. This example supports only Perspective model.")
-
-pipeline = dai.Pipeline(device)
+pipeline = dai.Pipeline()

platform = pipeline.getDefaultDevice().getPlatform()

@@ -47,7 +39,6 @@ def getFps(self):
align = pipeline.create(dai.node.ImageAlign)

stereo.setExtendedDisparity(True)

sync.setSyncThreshold(timedelta(seconds=1/(2*FPS)))

rgbOut = camRgb.requestOutput(size = (1280, 960), fps = FPS)
@@ -142,14 +133,10 @@ def updateBlendWeights(percentRgb):
# Blend when both received
if frameDepth is not None:
cvFrame = frameRgb.getCvFrame()

-# Undistort the rgb frame
-rgbIntrinsics = calibrationHandler.getCameraIntrinsics(RGB_SOCKET, int(cvFrame.shape[1]), int(cvFrame.shape[0]))
-
cvFrameUndistorted = cv2.undistort(
cvFrame,
-np.array(rgbIntrinsics),
-np.array(rgbDistortion),
+np.array(frameRgb.getTransformation().getIntrinsicMatrix()),
+np.array(frameRgb.getTransformation().getDistortionCoefficients()),
)
# Colorize the aligned depth
alignedDepthColorized = colorizeDepth(frameDepth.getFrame())
84 changes: 84 additions & 0 deletions examples/python/StereoDepth/stereo_depth_remap.py
@@ -0,0 +1,84 @@
import depthai as dai
import cv2
import numpy as np

def draw_rotated_rectangle(frame, center, size, angle, color, thickness=2):
"""
Draws a rotated rectangle on the given frame.

Args:
frame (numpy.ndarray): The image/frame to draw on.
center (tuple): The (x, y) coordinates of the rectangle's center.
size (tuple): The (width, height) of the rectangle.
angle (float): The rotation angle of the rectangle in degrees (counter-clockwise).
color (tuple): The color of the rectangle in BGR format (e.g., (0, 255, 0) for green).
thickness (int): The thickness of the rectangle edges. Default is 2.
"""
# Create a rotated rectangle
rect = ((center[0], center[1]), (size[0], size[1]), angle)

# Get the four vertices of the rotated rectangle
box = cv2.boxPoints(rect)
box = np.intp(box) # Convert to integer coordinates

# Draw the rectangle on the frame
cv2.polylines(frame, [box], isClosed=True, color=color, thickness=thickness)

def processDepthFrame(depthFrame):
depth_downscaled = depthFrame[::4]
if np.all(depth_downscaled == 0):
min_depth = 0
else:
min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
max_depth = np.percentile(depth_downscaled, 99)
depthFrameColor = np.interp(depthFrame, (min_depth, max_depth), (0, 255)).astype(np.uint8)
return cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)

with dai.Pipeline() as pipeline:
color = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
monoLeft = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
monoRight = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
stereo = pipeline.create(dai.node.StereoDepth)

stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
# stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)
# stereo.setOutputSize(640, 400)

colorCamOut = color.requestOutput((640, 480))

monoLeftOut = monoLeft.requestOutput((640, 480))
monoRightOut = monoRight.requestOutput((640, 480))

monoLeftOut.link(stereo.left)
monoRightOut.link(stereo.right)

colorOut = colorCamOut.createOutputQueue()
rightOut = monoRightOut.createOutputQueue()
stereoOut = stereo.depth.createOutputQueue()

pipeline.start()
while pipeline.isRunning():
colorFrame = colorOut.get()
stereoFrame = stereoOut.get()

assert colorFrame.validateTransformations()
assert stereoFrame.validateTransformations()

clr = colorFrame.getCvFrame()
depth = processDepthFrame(stereoFrame.getCvFrame())

rect = dai.RotatedRect(dai.Point2f(300, 200), dai.Size2f(200, 100), 10)
remappedRect = colorFrame.getTransformation().remapRectTo(stereoFrame.getTransformation(), rect)

print(f"Original rect x: {rect.center.x} y: {rect.center.y} width: {rect.size.width} height: {rect.size.height} angle: {rect.angle}")
print(f"Remapped rect x: {remappedRect.center.x} y: {remappedRect.center.y} width: {remappedRect.size.width} height: {remappedRect.size.height} angle: {remappedRect.angle}")

draw_rotated_rectangle(clr, (rect.center.x, rect.center.y), (rect.size.width, rect.size.height), rect.angle, (255, 0, 0))
draw_rotated_rectangle(depth, (remappedRect.center.x, remappedRect.center.y), (remappedRect.size.width, remappedRect.size.height), remappedRect.angle, (255, 0, 0))

cv2.imshow("color", clr)
cv2.imshow("depth", depth)

if cv2.waitKey(1) == ord('q'):
break
pipeline.stop()
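processDepthFrame above stretches depth between its 1st and 99th percentiles before applying a colormap, so a few outlier pixels don't compress the useful range. A standalone sketch of that normalization (the helper name is illustrative):

```python
import numpy as np

def percentile_normalize(depth):
    """Percentile-based contrast stretch, as in processDepthFrame above:
    clip to the 1st/99th percentiles so outliers don't flatten the colormap."""
    valid = depth[depth != 0]           # treat 0 as "no depth measured"
    lo = np.percentile(valid, 1) if valid.size else 0
    hi = np.percentile(depth, 99)
    return np.interp(depth, (lo, hi), (0, 255)).astype(np.uint8)

depth = np.array([[0, 500], [1000, 4000]], dtype=np.uint16)
out = percentile_normalize(depth)
print(out.min(), out.max())  # spans the full 0-255 range
```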
15 changes: 10 additions & 5 deletions include/depthai/common/ImgTransformations.hpp
@@ -177,12 +177,12 @@ struct ImgTransformation {
ImgTransformation& addCrop(int x, int y, int width, int height);
/**
* Add a pad transformation. Works like crop, but in reverse.
-* @param x Padding on the left. The x coordinate of the top-left corner in the new image.
-* @param y Padding on the top. The y coordinate of the top-left corner in the new image.
-* @param width New image width
-* @param height New image height
+* @param top Padding on the top
+* @param bottom Padding on the bottom
+* @param left Padding on the left
+* @param right Padding on the right
*/
-ImgTransformation& addPadding(int x, int y, int width, int height);
+ImgTransformation& addPadding(int top, int bottom, int left, int right);
/**
* Add a vertical flip transformation.
*/
@@ -204,6 +204,11 @@
*/
ImgTransformation& addScale(float scaleX, float scaleY);
ImgTransformation& addSrcCrops(const std::vector<dai::RotatedRect>& crops);
+ImgTransformation& setSize(size_t width, size_t height);
+ImgTransformation& setSourceSize(size_t width, size_t height);
+ImgTransformation& setIntrinsicMatrix(std::array<std::array<float, 3>, 3> intrinsicMatrix);
+ImgTransformation& setDistortionModel(CameraModel model);
+ImgTransformation& setDistortionCoefficients(std::vector<float> coefficients);

/**
* Remap a point from this transformation to another. If the intrinsics are different (e.g. different camera), the function will also use the intrinsics to
9 changes: 3 additions & 6 deletions include/depthai/utility/ImageManipV2Impl.hpp
@@ -2811,15 +2811,12 @@ void Warp<ImageManipBuffer, ImageManipData>::build(const FrameSpecs srcFrameSpec
sourceMaxY = inHeight;
for(const auto& corners : srcCorners) {
auto [minx, maxx, miny, maxy] = getOuterRect(std::vector<std::array<float, 2>>(corners.begin(), corners.end()));
-minx = std::max(minx, 0.0f);
-maxx = std::min(maxx, (float)inWidth);
-miny = std::max(miny, 0.0f);
-maxy = std::min(maxy, (float)inHeight);
-sourceMinX = std::max(sourceMinX, (size_t)std::floor(minx));
-sourceMinY = std::max(sourceMinY, (size_t)std::floor(miny));
+sourceMinX = std::max(sourceMinX, (size_t)std::floor(std::max(minx, 0.f)));
+sourceMinY = std::max(sourceMinY, (size_t)std::floor(std::max(miny, 0.f)));
sourceMaxX = std::min(sourceMaxX, (size_t)std::ceil(maxx));
sourceMaxY = std::min(sourceMaxY, (size_t)std::ceil(maxy));
}
+if(sourceMinX >= sourceMaxX || sourceMinY >= sourceMaxY) throw std::runtime_error("Initial crop is outside the source image");

#if !DEPTHAI_IMAGEMANIPV2_OPENCV && !DEPTHAI_IMAGEMANIPV2_FASTCV || !defined(DEPTHAI_HAVE_OPENCV_SUPPORT) && !defined(DEPTHAI_HAVE_FASTCV_SUPPORT)
const uint32_t outWidth = dstFrameSpecs.width;