What is the principle behind Google Project Tango's depth sensing?
Let the pictures do the talking.
I. Methods for acquiring depth information
&" dw="554" dh="187" class="origin_image zh-lightbox-thumb lazy" w="554" data-original="https://pic2.zhimg.com/09ddfae0e0b51a932b3c408c7a642405_r.jpg" data-actualsrc="//i1.wp.com/pic2.zhimg.com/50/09ddfae0e0b51a932b3c408c7a642405_hd.jpg"> 1、結構光(Structured Light)獲取深度信息:
&" dw="306" dh="198" class="content_image lazy" w="306" data-actualsrc="//i1.wp.com/pic3.zhimg.com/50/9c70cf39f72aea3d84ae74c655310b4a_hd.jpg">線光源獲取深度信息,需要掃描&" dw="390" dh="296" class="content_image lazy" w="390" data-actualsrc="//i1.wp.com/pic3.zhimg.com/50/b0b9c6cf23690fe685e3978325f13572_hd.jpg">
面光源獲取深度信息,一次完成2、飛行時間TOF(Time of Flight)獲取深度信息
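For intuition, here is a minimal sketch of the triangulation behind structured light, assuming a pinhole camera and a projector separated by a known baseline. All names and numbers are illustrative, not Tango's actual implementation:

```python
# Minimal sketch (not Tango's real pipeline): depth from structured light
# by triangulation. A projector casts a known pattern; the camera sees each
# pattern feature shifted ("disparity") relative to a reference position.
# With projector-camera baseline B and focal length f (in pixels), the
# depth of the feature follows Z = f * B / d.

def depth_from_disparity(disparity_px: float,
                         focal_px: float = 600.0,     # assumed focal length, pixels
                         baseline_m: float = 0.075):  # assumed 7.5 cm baseline
    """Triangulated depth in meters for one pattern feature."""
    if disparity_px <= 0:
        return float("inf")  # no shift -> point at infinity (or a bad match)
    return focal_px * baseline_m / disparity_px

# A feature shifted by 30 px lies at 600 * 0.075 / 30 = 1.5 m.
print(depth_from_disparity(30.0))  # 1.5
```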
2. Acquiring depth with time of flight (TOF)
The principle is simple: measure the time light takes from emission to return, compute the distance from it, and you have the depth.
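A minimal sketch of that calculation, assuming an ideal timed round trip (the function name is illustrative):

```python
# Light travels to the object and back, so the one-way distance
# is half the round trip: d = c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance in meters from a measured round-trip time in seconds."""
    return C * round_trip_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
print(tof_distance(10e-9))  # ~1.499 m
```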
&" dw="554" dh="193" class="origin_image zh-lightbox-thumb lazy" w="554" data-original="https://pic3.zhimg.com/8f6c83612e9167137f4f6515cd7286c9_r.jpg" data-actualsrc="//i1.wp.com/pic3.zhimg.com/50/8f6c83612e9167137f4f6515cd7286c9_hd.jpg">&" dw="374" dh="292" class="content_image lazy" w="374" data-actualsrc="//i1.wp.com/pic3.zhimg.com/50/04f0e3687817aaa19aa6cd85ab98b288_hd.jpg">&" dw="426" dh="229" class="origin_image zh-lightbox-thumb lazy" w="426" data-original="https://pic3.zhimg.com/1c8aebebf45a4226f29336aed78504df_r.jpg" data-actualsrc="//i1.wp.com/pic3.zhimg.com/50/1c8aebebf45a4226f29336aed78504df_hd.jpg"> 二、ProjectTango
The first-generation Project Tango acquired depth using structured light.
&" dw="415" dh="551" class="content_image lazy" w="415" data-actualsrc="//i1.wp.com/pic3.zhimg.com/50/124d29f98aa5a2ff005f02cb819a187e_hd.jpg">&" dw="272" dh="204" class="content_image lazy" w="272" data-actualsrc="//i1.wp.com/pic3.zhimg.com/50/abc628265ac55284dd67a0621c4a1956_hd.jpg">結構光的Pattern 第二代Project Tango
採用TOF感知深度
At Google I/O 2015, a new Project Tango smartphone development platform powered by a Qualcomm® Snapdragon™ 810 processor was announced. The integrated time-of-flight 3D camera from pmd senses depth all by itself, just by measuring how long it takes for light to shoot out, hit an object, and come back. As no baseline is required, it can fit in a very small space, which in turn makes integration into space-constrained phone designs easier.

III. Kinect
&" dw="318" dh="161" class="content_image lazy" w="318" data-actualsrc="//i1.wp.com/pic4.zhimg.com/50/5cd93bfd6d0de80a72af024783fc6e1a_hd.jpg"> Kinect V1
採用結構光的方法
&" dw="554" dh="238" class="origin_image zh-lightbox-thumb lazy" w="554" data-original="https://pic4.zhimg.com/cb6a3dc3ede70f33063078c960000501_r.jpg" data-actualsrc="//i1.wp.com/pic4.zhimg.com/50/cb6a3dc3ede70f33063078c960000501_hd.jpg">&" dw="534" dh="223" class="origin_image zh-lightbox-thumb lazy" w="534" data-original="https://pic2.zhimg.com/bea853b234f0c6df4b923a42f889d27c_r.jpg" data-actualsrc="//i1.wp.com/pic2.zhimg.com/50/bea853b234f0c6df4b923a42f889d27c_hd.jpg">Kinect V2
採用TOF方法
&" dw="539" dh="177" class="origin_image zh-lightbox-thumb lazy" w="539" data-original="https://pic2.zhimg.com/fe7b83e785e2c4915d5e06c851f6270b_r.jpg" data-actualsrc="//i1.wp.com/pic2.zhimg.com/50/fe7b83e785e2c4915d5e06c851f6270b_hd.jpg">&" dw="554" dh="179" class="origin_image zh-lightbox-thumb lazy" w="554" data-original="https://pic2.zhimg.com/c3c585bbd0fbc8794d239ca43a413df3_r.jpg" data-actualsrc="//i1.wp.com/pic2.zhimg.com/50/c3c585bbd0fbc8794d239ca43a413df3_hd.jpg">Rather than the coded-light patterns used
by the original Kinect, the new version is reported to use direct time of flight
(TOF) measurement. TOF sensors are
essentially small infrared 「radars」 that instantly create a depth map.四、Intel
RealSense有兩款,都採用結構光獲取深度信息
Intel RealSense Camera (F200)
&" dw="554" dh="217" class="origin_image zh-lightbox-thumb lazy" w="554" data-original="https://pic4.zhimg.com/1ecc1d1ca408ee1b299d06b4f415a4a9_r.jpg" data-actualsrc="//i1.wp.com/pic4.zhimg.com/50/1ecc1d1ca408ee1b299d06b4f415a4a9_hd.jpg">Intel RealSense Camera(R200)
&" dw="554" dh="206" class="origin_image zh-lightbox-thumb lazy" w="554" data-original="https://pic4.zhimg.com/a14ff664d7cb9a9431ef73f297d9554e_r.jpg" data-actualsrc="//i1.wp.com/pic4.zhimg.com/50/a14ff664d7cb9a9431ef73f297d9554e_hd.jpg">&" dw="554" dh="205" class="origin_image zh-lightbox-thumb lazy" w="554" data-original="https://pic2.zhimg.com/925bdbe7592f84c5a0b986cb5269e82c_r.jpg" data-actualsrc="//i1.wp.com/pic2.zhimg.com/50/925bdbe7592f84c5a0b986cb5269e82c_hd.jpg">Intel RealSense Camera的結構光Pattern
Just joining the fun here; I'm a layman, so my understanding may be on the simpler side.
—————————————————————————————————————
The component marked in blue in the picture is the integrated depth sensor, responsible for depth sensing. Depth sensing is generally used to perceive the surrounding three-dimensional objects, though usually static ones: detecting object surfaces, obstacles, and shapes.
The depth sensor is infrared-based. Outdoors, direct sunlight floods the scene with infrared and interferes with the sensor, so this system is better suited to indoor use. The camera sees the world through infrared light, and infrared can be unreliable: even indoors, a burning flame, a bright incandescent lamp, or sunlight coming through a window all emit strong infrared. There are also places that reflect no infrared at all; dark materials not only fail to reflect infrared but absorb it, so the room still needs a certain amount of lighting. Tango is therefore designed for static, room-scale scenes under normal indoor lighting, and it is heavily optimized for walls, tables and chairs, doors, and ceilings. Related answer: "What limitations do tracking devices have in usage scenarios and coverage, and do they affect the design of interactive VR spaces?"

There are three main ways to sense depth:

Structured light: emit a pattern of infrared dots to illuminate the environment or the outline of an object, then capture the reflected infrared with a camera and measure the size of each dot. Large dots are far from the camera; small dots are close (like a projector: the farther it projects, the larger the image). The "depth information" is simply the distance from the device to the objects it sees. Combine depth with motion tracking and you can even measure the distance between two points that never appeared in the same frame.

Time of flight: the infrared beam takes time to travel, so the system times how long it takes to capture the reflection. At short range the round trip of a photon may be only a few nanoseconds.

Stereo: the effect of capturing a scene with two eyes, or two cameras. The larger the difference (disparity) between what the two cameras see, the closer the object is to us; the farther away an object is, the smaller the disparity.

Build an XYZ coordinate system through the lens; the set of points carrying those three coordinates is the point cloud. Each point marks how far the surface there is from the camera, and the denser the points, the finer the detail. Take this tabletop, for example: the system builds an XYZ coordinate system over it.

Hearing "point cloud," one immediately thinks of cloud services, and in fact the device's own hardware cannot drive large meshes and 3D textures. Some libraries for processing point clouds and depth images: ROS.org | Powering the world's robots, and PCL - Point Cloud Library (PCL). Pipelines like these need both on-device processing and cloud-side services.
—————————————————————————————————————
On the captured plane, build a planar coordinate system IJ. Adding the depth map on top of the (X, Y, Z) coordinates gives XYZij. The point cloud in (X, Y, Z) is represented here as a one-dimensional array of points. Some regions of a point cloud are sparse, even with large gaps, and points are not guaranteed to be adjacent or spatially connected. IJ therefore applies a nearest-neighbor filter to fill those gaps; a sketch of both steps follows below.
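A minimal sketch, in Python with NumPy, of the two steps just described: unprojecting a depth image into an (X, Y, Z) point cloud with a pinhole camera model, and filling missing depth pixels from the nearest valid neighbor. The intrinsics FX, FY, CX, CY and both function names are illustrative assumptions, not Tango's actual calibration or API.

```python
import numpy as np

# Sketch: turn a depth image into an (X, Y, Z) point cloud with a pinhole
# model, then fill missing depths with the nearest valid neighbor (a crude
# stand-in for the IJ nearest-neighbor filter described above).
# Intrinsics below are illustrative, not real Tango calibration values.
FX, FY, CX, CY = 520.0, 520.0, 320.0, 240.0

def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """depth_m: HxW depth in meters (0 = no measurement). Returns Nx3 points."""
    h, w = depth_m.shape
    i, j = np.indices((h, w))            # pixel rows (i) and columns (j)
    z = depth_m
    x = (j - CX) * z / FX                # back-project with the pinhole model
    y = (i - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]            # keep only valid measurements

def fill_nearest(depth_m: np.ndarray) -> np.ndarray:
    """Fill zero (missing) pixels from the nearest valid pixel, brute force."""
    out = depth_m.copy()
    valid = np.argwhere(depth_m > 0)
    for i, j in np.argwhere(depth_m == 0):
        k = np.abs(valid - [i, j]).sum(axis=1).argmin()  # nearest by L1 distance
        out[i, j] = depth_m[tuple(valid[k])]
    return out

depth = np.zeros((4, 4)); depth[1, 1] = 2.0; depth[2, 3] = 1.0
print(depth_to_point_cloud(depth).shape)   # (2, 3)
print(fill_nearest(depth)[0, 0])           # 2.0 (copied from nearest valid pixel)
```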
https://developers.google.com/project-tango/apis/c/reference/struct/tango-x-y-zij
(The TangoXYZij depth-map reference.)
—————————————————————————————————————
Then, on this basis, you can go ahead and build models.
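How might "building a model" look in practice? One common route is surface reconstruction from the point cloud. A hedged sketch using the Open3D library (one option among several; the PCL library mentioned above offers similar tools), with illustrative parameters and a random placeholder cloud standing in for real sensor data:

```python
import numpy as np
import open3d as o3d  # one of several libraries for this; PCL is another

# Sketch: turn an Nx3 point array into a triangle mesh via Poisson
# surface reconstruction. Parameters here are illustrative defaults.
points = np.random.rand(2000, 3)  # placeholder; use a real point cloud

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.estimate_normals(  # Poisson reconstruction needs oriented normals
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)  # higher depth = finer (and heavier) mesh
o3d.io.write_triangle_mesh("model.ply", mesh)
```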