Converting screen space to world space to create a point cloud in Python

Posted 2024-09-27 00:20:46


I am trying to convert screen-space coordinates (2D) to world space (3D) in order to generate a point cloud in Python. I am given a projection matrix, a view matrix and a depth image, and I am trying to follow the steps from: Getting World Position from Depth Buffer Value

So far I have come up with the following code:

import numpy as np

# proj, view and depth are the given projection matrix, view matrix and depth image
origin = camera[:-1]  # camera position (xyz)

m_points = []

# inverse of the combined projection and view matrix
IViewProj = np.linalg.inv(proj @ view)

for y in range(height):
    for x in range(width):

        # 4x1 NDC coordinates in [-1, 1]
        # depth image with grayscale values from 0-255
        clipSpaceLocation = np.array([(x / width) * 2 - 1,
                                      (y / height) * 2 - 1,
                                      depth[y, x] * 2 - 1,
                                      1])

        # 4x4 @ 4x1 -> 4x1
        worldSpaceLocation = IViewProj @ clipSpaceLocation
        # perspective division
        worldSpaceLocation /= worldSpaceLocation[-1]
        worldSpaceV3 = worldSpaceLocation[:-1]
        m_points.append(worldSpaceV3)

m_points = np.array(m_points)

m_points are the [x, y, z] positions that I finally plot as a point cloud, but it does not give the correct result; it basically gives me a point cloud of the depth image. Can anyone help me?
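For reference, here is a minimal, self-contained sanity check of the same column-vector unprojection on a synthetic camera (the `perspective` helper and the test point below are made up for the check, not my actual data):

import numpy as np

def perspective(fov_y, aspect, near, far):
    # OpenGL-style perspective projection matrix (column-vector convention)
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

# synthetic camera: identity view, 60-degree field of view
proj = perspective(np.radians(60), 1.0, 0.1, 100.0)
view = np.eye(4)

# a known world-space point in front of the camera (camera looks down -Z)
p_world = np.array([0.5, -0.25, -5.0, 1.0])

# forward: world -> clip -> NDC
clip = proj @ view @ p_world
ndc = clip / clip[3]

# backward: NDC -> world, exactly as in the code above
inv_view_proj = np.linalg.inv(proj @ view)
recovered = inv_view_proj @ ndc
recovered /= recovered[3]

print(np.allclose(recovered[:3], p_world[:3]))  # True

This prints True, so the round trip itself works with standard column-vector matrices; the question is why it fails on my real matrices and depth image.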


1 Answer
#1 · Posted 2024-09-27 00:20:46

I have figured it out. In case anyone is looking for an answer in Python, this is the solution:

import imageio
import numpy as np

@staticmethod
def read_pc_from_layers(file_cam, file_depth, file_colour):
    # read_cam_file and PointCloud are helpers from my own project
    origin, projection, view = read_cam_file(file_cam)
    depth = imageio.imread(file_depth)
    colour = imageio.imread(file_colour)
    # note the order (view @ projection) and the row-vector multiply below
    i_view_projection = np.linalg.inv(view @ projection)
    width = depth.shape[1]
    height = depth.shape[0]
    vertices = []
    colours = []
    point_cloud = PointCloud()
    for y in range(height):
        for x in range(width):
            # map the 8-bit depth to [0, 1]; rows are indexed bottom-up so
            # image rows match the NDC y axis
            d = depth[height - y - 1][x][0] / 255.0
            # skip invalid depths at the near/far extremes
            if 0.00001 < d < 0.99999999999:
                # NDC coordinates in [-1, 1]
                clip_space_location = np.array([(x / width) * 2 - 1,
                                                (y / height) * 2 - 1,
                                                d * 2 - 1,
                                                1])
                # row-vector convention: v @ M instead of M @ v
                world_space_location = clip_space_location @ i_view_projection
                # perspective division
                world_space_location /= world_space_location[3]
                colours.append(colour[height - y - 1][x])
                vertices.append(world_space_location[0:3])
    point_cloud.vertices = np.asarray(vertices)
    point_cloud.colours_luminance = np.asarray(colours).astype(np.uint8)
    point_cloud.colours_labels = np.asarray(colours).astype(np.uint8)
    return point_cloud
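For context, `read_cam_file` and `PointCloud` come from my own project, so the call below is only a rough sketch; the file names and the `Reconstructor` class holding the staticmethod are placeholders:

# Hypothetical call site — the paths and the enclosing Reconstructor class
# are placeholders for wherever the method actually lives.
pc = Reconstructor.read_pc_from_layers("scene_cam.txt",
                                       "scene_depth.png",
                                       "scene_colour.png")
print(pc.vertices.shape)  # (N, 3) world-space points

The key difference from the code in my question is the multiplication order: `np.linalg.inv(view @ projection)` together with the row-vector multiply `clip_space_location @ i_view_projection`, which suggests the matrices read from the camera file are stored in the transposed (row-vector) convention.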
