{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "nbsphinx": "hidden" }, "outputs": [], "source": [ "import open3d as o3d\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import copy\n", "import os\n", "import sys\n", "\n", "# only needed for tutorial, monkey patches visualization\n", "sys.path.append('..')\n", "import open3d_tutorial as o3dtut\n", "# change to True if you want to interact with the visualization windows\n", "o3dtut.interactive = not \"CI\" in os.environ" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Point Cloud\n", "This tutorial demonstrates basic usage of a point cloud.\n", "\n", "## Visualize point cloud\n", "The first part of the tutorial reads a point cloud and visualizes it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Load a ply point cloud, print it, and render it\")\n", "pcd = o3d.io.read_point_cloud(\"../../TestData/fragment.ply\")\n", "print(pcd)\n", "print(np.asarray(pcd.points))\n", "o3d.visualization.draw_geometries([pcd], zoom=0.3412, \n", " front=[0.4257, -0.2125, -0.8795],\n", " lookat=[2.6172, 2.0475, 1.532],\n", " up=[-0.0694, -0.9768, 0.2024])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`read_point_cloud` reads a point cloud from a file. It tries to decode the file based on the extension name. The supported extension names are: `pcd`, `ply`, `xyz`, `xyzrgb`, `xyzn`, `pts`.\n", "\n", "`draw_geometries` visualizes the point cloud. Use mouse/trackpad to see the geometry from different view point.\n", "\n", "It looks like a dense surface, but it is actually a point cloud rendered as surfels. The GUI supports various keyboard functions. One of them, the - key reduces the size of the points (surfels).\n", "\n", "
\n", " \n", "**Note:** \n", "\n", "Press `h` key to print out a complete list of keyboard instructions for the GUI. For more information of the visualization GUI, refer to [Visualization](visualization.ipynb) and [Customized visualization](../Advanced/customized_visualization.rst).\n", "\n", "
\n", "\n", "
\n", " \n", "**Note:** \n", "\n", "On OS X, the GUI window may not receive keyboard event. In this case, try to launch Python with `pythonw` instead of `python`.\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Voxel downsampling\n", "Voxel downsampling uses a regular voxel grid to create a uniformly downsampled point cloud from an input point cloud. It is often used as a pre-processing step for many point cloud processing tasks. The algorithm operates in two steps:\n", "\n", "1. Points are bucketed into voxels.\n", "2. Each occupied voxel generates exact one point by averaging all points inside." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Downsample the point cloud with a voxel of 0.05\")\n", "downpcd = pcd.voxel_down_sample(voxel_size=0.05)\n", "o3d.visualization.draw_geometries([downpcd], zoom=0.3412, \n", " front=[0.4257, -0.2125, -0.8795],\n", " lookat=[2.6172, 2.0475, 1.532],\n", " up=[-0.0694, -0.9768, 0.2024])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Vertex normal estimation\n", "Another basic operation for point cloud is point normal estimation.\n", "Press n to see point normal. Key - and key + can be used to control the length of the normal." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Recompute the normal of the downsampled point cloud\")\n", "downpcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))\n", "o3d.visualization.draw_geometries([downpcd], zoom=0.3412, \n", " front=[0.4257, -0.2125, -0.8795],\n", " lookat=[2.6172, 2.0475, 1.532],\n", " up=[-0.0694, -0.9768, 0.2024], \n", " point_show_normal=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`estimate_normals` computes normal for every point. The function finds adjacent points and calculate the principal axis of the adjacent points using covariance analysis.\n", "\n", "The function takes an instance of `KDTreeSearchParamHybrid` class as an argument. The two key arguments `radius = 0.1` and `max_nn = 30` specifies search radius and maximum nearest neighbor. It has 10cm of search radius, and only considers up to 30 neighbors to save computation time.\n", "\n", "
\n", " \n", "**Note:** \n", "\n", "The covariance analysis algorithm produces two opposite directions as normal candidates. Without knowing the global structure of the geometry, both can be correct. This is known as the normal orientation problem. Open3D tries to orient the normal to align with the original normal if it exists. Otherwise, Open3D does a random guess. Further orientation functions such as `orient_normals_to_align_with_direction` and `orient_normals_towards_camera_location` need to be called if the orientation is a concern.\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Access estimated vertex normal\n", "Estimated normal vectors can be retrieved by `normals` variable of `downpcd`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Print a normal vector of the 0th point\")\n", "print(downpcd.normals[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To check out other variables, please use `help(downpcd)`. Normal vectors can be transformed as a numpy array using `np.asarray`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Print the normal vectors of the first 10 points\")\n", "print(np.asarray(downpcd.normals)[:10, :])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check [Working with NumPy](working_with_numpy.ipynb) for more examples regarding numpy array." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Crop point cloud" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Load a polygon volume and use it to crop the original point cloud\")\n", "vol = o3d.visualization.read_selection_polygon_volume(\"../../TestData/Crop/cropped.json\")\n", "chair = vol.crop_point_cloud(pcd)\n", "o3d.visualization.draw_geometries([chair], zoom=0.7, \n", " front=[0.5439, -0.2333, -0.8060],\n", " lookat=[2.4615, 2.1331, 1.338],\n", " up=[-0.1781, -0.9708, 0.1608])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`read_selection_polygon_volume` reads a json file that specifies polygon selection area. `vol.crop_point_cloud(pcd)` filters out points. Only the chair remains." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Paint point cloud" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Paint chair\")\n", "chair.paint_uniform_color([1, 0.706, 0])\n", "o3d.visualization.draw_geometries([chair], zoom=0.7, \n", " front=[0.5439, -0.2333, -0.8060],\n", " lookat=[2.4615, 2.1331, 1.338],\n", " up=[-0.1781, -0.9708, 0.1608])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`paint_uniform_color` paints all the points to a uniform color. The color is in RGB space, [0, 1] range." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bounding Volumes\n", "The `PointCloud` geometry type has bounding volumes as all other geometry types in Open3D. Currently, Open3D implements an `AxisAlignedBoundingBox` and an `OrientedBoundingBox` that can also be used to crop the geometry." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aabb = chair.get_axis_aligned_bounding_box()\n", "aabb.color = (1,0,0)\n", "obb = chair.get_oriented_bounding_box()\n", "obb.color = (0,1,0)\n", "o3d.visualization.draw_geometries([chair, aabb, obb], zoom=0.7, \n", " front=[0.5439, -0.2333, -0.8060],\n", " lookat=[2.4615, 2.1331, 1.338],\n", " up=[-0.1781, -0.9708, 0.1608])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Convex hull\n", "The convex hull of a point cloud is the smallest convex set that contains all points. Open3D contains the method `compute_convex_hull` that computes the convex hull for example of a point cloud. The implementation is based on [Qhull](http://www.qhull.org/)\n", "\n", "In the example code below we first sample a point cloud from a mesh and compute the convex hull that is returned as triangle mesh. 
"Then, we visualize the convex hull as a red `LineSet`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pcl = o3dtut.get_bunny_mesh().sample_points_poisson_disk(number_of_points=2000)\n", "hull, _ = pcl.compute_convex_hull()\n", "hull_ls = o3d.geometry.LineSet.create_from_triangle_mesh(hull)\n", "hull_ls.paint_uniform_color((1, 0, 0))\n", "o3d.visualization.draw_geometries([pcl, hull_ls])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DBSCAN clustering\n", "Given a point cloud, for example from a depth sensor, we want to group local point cloud clusters together. For this purpose, we can use clustering algorithms. Open3D implements DBSCAN [\\[Ester1996\\]](../reference.html#Ester1996), which is a density-based clustering algorithm. The algorithm is implemented in `cluster_dbscan` and requires two parameters: `eps` defines the distance to neighbors in a cluster and `min_points` defines the minimum number of points required to form a cluster. The function returns `labels`, where the label `-1` indicates noise." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pcd = o3d.io.read_point_cloud(\"../../TestData/fragment.ply\")\n", "\n", "with o3d.utility.VerbosityContextManager(o3d.utility.VerbosityLevel.Debug) as cm:\n", "    labels = np.array(pcd.cluster_dbscan(eps=0.02, min_points=10, print_progress=True))\n", "\n", "max_label = labels.max()\n", "print(f\"point cloud has {max_label + 1} clusters\")\n", "colors = plt.get_cmap(\"tab20\")(labels / (max_label if max_label > 0 else 1))\n", "colors[labels < 0] = 0\n", "pcd.colors = o3d.utility.Vector3dVector(colors[:, :3])\n", "o3d.visualization.draw_geometries([pcd], zoom=0.455,\n", "                                  front=[-0.4999, -0.1659, -0.8499],\n", "                                  lookat=[2.1813, 2.0619, 2.0999],\n", "                                  up=[0.1204, -0.9852, 0.1215])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<div class=\"alert alert-info\">\n",
\n", " \n", "**Note:** \n", "\n", "This algorithm precomputes all neighbours in the epsilon radius for all points. This can require a lot of memory if epsilon is choosen too large.\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plane segmentation\n", "Open3D contains also support to segment geometric primitives from point clouds using RANSAC. To find the plane with the largest support in the point cloud, we can use `segement_plane`. The method has three arguments. `distance_threshold` defines the maximum distance a point can have to an estimated plane to be considered an inlier, `ransac_n` defines the number of points that are randomly sampled to estimate a plane, and `num_iterations` defines how often a random plane is sampled and verified. The function than returns the plane as $(a,b,c,d)$ such that for each point $(x,y,z)$ on the plane we have $ax + by + cz + d = 0$. The function further retuns a list of indices of the inlier points." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pcd = o3d.io.read_point_cloud(\"../../TestData/fragment.pcd\")\n", "plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,\n", " ransac_n=3,\n", " num_iterations=1000)\n", "[a, b, c, d] = plane_model\n", "print(f\"Plane equation: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0\")\n", "\n", "inlier_cloud = pcd.select_by_index(inliers)\n", "inlier_cloud.paint_uniform_color([1.0, 0, 0])\n", "outlier_cloud = pcd.select_by_index(inliers, invert=True)\n", "o3d.visualization.draw_geometries([inlier_cloud, outlier_cloud], zoom=0.8, \n", " front=[-0.4999, -0.1659, -0.8499],\n", " lookat=[2.1813, 2.0619, 2.0999],\n", " up=[0.1204, -0.9852, 0.1215])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hidden point removal\n", "Imagine you want to render a point cloud from a given view point, but points from the background leak into the foreground because they are not occluded by other points. For this purpose we can apply a hidden point removal algorithm. In Open3D the method by [\\[Katz2007\\]](../reference.html#Katz2007) is implemented that approximates the visibility of a point cloud from a given view without surface reconstruction or normal estimation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Convert mesh to a point cloud and estimate dimensions\")\n", "pcd = o3dtut.get_armadillo_mesh().sample_points_poisson_disk(5000)\n", "diameter = np.linalg.norm(np.asarray(pcd.get_max_bound()) - np.asarray(pcd.get_min_bound()))\n", "o3d.visualization.draw_geometries([pcd])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Define parameters used for hidden_point_removal\")\n", "camera = [0, 0, diameter]\n", "radius = diameter * 100\n", "\n", "print(\"Get all points that are visible from given view point\")\n", "_, pt_map = pcd.hidden_point_removal(camera, radius)\n", "\n", "print(\"Visualize result\")\n", "pcd = pcd.select_by_index(pt_map)\n", "o3d.visualization.draw_geometries([pcd])" ] } ], "metadata": { "celltoolbar": "Edit Metadata", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.7" } }, "nbformat": 4, "nbformat_minor": 2 }