Graduation Design (Thesis) Translation of Foreign Literature

Department: Mechatronic Information Department
Major: Mechanical Design, Manufacturing and Automation
Class:
Name:
Student No.:
Source: Robotics and Computer-Integrated Manufacturing 21 (2005) 368-378
Attachments: 1. Original text; 2. Translation
March 2013

Locating completeness evaluation and revision in fixture plan

CAM Lab, Department of Mechanical Engineering, Worcester Polytechnic Institute, 100 Institute Road, Worcester, MA 01609, USA

Received 14 September 2004; revised 9 November 2004; accepted 10 November 2004

Abstract

Geometric constraint is one of the most important considerations in fixture design. Analytical formulations for deterministic location have been well developed. However, how to analyze and revise a non-deterministic locating scheme during actual fixture design practice has not been thoroughly studied. In this paper, a method is proposed to characterize the geometric constraint status of a fixturing system, with a focus on the under-constrained case. For a given locating scheme, an under-constrained status, if it exists, can be identified, and all unconstrained motions of the workpiece can be recognized automatically. This helps to improve a deficient locating plan and provides guidelines for revision toward an eventually deterministic location.

Keywords: Fixture design; Geometric constraint; Deterministic location; Under-constrained; Over-constrained

1. Introduction

A fixture is a mechanism used in the manufacturing industry to hold a workpiece securely in position. As a critical first step in process planning for machining parts, fixture design must ensure the positional accuracy and dimensional precision of the workpiece. In general, the 3-2-1 principle is the most widely used guideline in developing a locating scheme; V-block and pin-hole locating principles are also commonly applied.

A locating scheme for a machining fixture must satisfy several requirements. The most basic one is that it must provide deterministic location for the workpiece. This notion states that a locating scheme produces deterministic location when the workpiece cannot move without losing contact with at least one locator. This has long been one of the most fundamental guidelines in fixture design, and research on the geometric constraint status shows that a workpiece under any locating scheme falls into one of three categories:

1. Well-constrained (deterministic): the workpiece is mated at a unique position when six locators contact the workpiece surfaces.
2. Under-constrained: the six degrees of freedom of the workpiece are not fully constrained.
3. Over-constrained: the six degrees of freedom of the workpiece are constrained by more than six locators.

In 1985, Asada and By [1] proposed full rank of the Jacobian matrix of the constraint equations as a criterion, forming the basis of the analytical investigations of deterministic location that followed. In 1989, Chou et al. [2] formulated the deterministic location problem using screw theory; they showed that the locating wrench matrix must have full rank to achieve deterministic location. This method has been adopted in numerous subsequent studies. Wang et al. [3] considered the influence of the locator-workpiece contact area, using surface contact instead of point contact. They introduced a contact matrix and pointed out that two contacting bodies should not have equal but opposite curvatures at the contact point. Carlson [4] argued that a linear approximation may be insufficient for some applications, such as non-prismatic surfaces or non-small relative errors, and proposed a second-order Taylor expansion that also takes locating error interaction into account. Marin and Ferreira [5] applied Chou's formulation to 3-2-1 locating schemes and developed several planning rules.

Despite the numerous studies of deterministic location, little attention has been paid to the analysis of non-deterministic location. In Asada and By's formulation, frictionless contact between the fixture elements and the workpiece is assumed. The workpiece surfaces on which the locators are placed are described by piecewise differentiable functions g_i at the ideal location q* (see Fig. 1). The surface functions are defined such that g_i(q*) = 0. For the location to be deterministic, the following system of locating equations should have a unique solution:

    g_i(q) = 0, i = 1, 2, ..., n,    (1)

where n is the number of locators and q = [x, y, z, theta_x, theta_y, theta_z] represents the position and orientation of the workpiece. Considering only the neighborhood of the ideal location q*, Asada and By showed that the linearized system

    h_i^T (q - q*) = 0, i = 1, 2, ..., n,    (2)

has q = q* as its only solution if and only if the Jacobian matrix of the geometric functions,

    H = [h_1, h_2, ..., h_n]^T, h_i = dg_i/dq,    (3)

has full rank. In a 3-2-1 locating scheme, the rank of the Jacobian of the constraint equations determines the constraint status, as shown in Table 1. If the number of locators is less than 6, the workpiece is under-constrained, i.e. there exists at least one free motion of the workpiece not constrained by the locators. If the matrix has full rank but the number of locators is greater than 6, the workpiece is over-constrained, which indicates there exists at least one locator that can be removed without changing the geometric constraint status of the workpiece. For a locating scheme other than 3-2-1, equivalent locating points can be extracted by establishing a datum frame; Hu [6] has developed a systematic approach for this purpose. The criterion can therefore be applied to all locating schemes.

Fig. 1. Fixturing system model.

Table 1
Rank of Jacobian | Number of locators | Status
rank < 6         | any                | Under-constrained
rank = 6         | n = 6              | Well-constrained (deterministic)
rank = 6         | n > 6              | Over-constrained

Kang et al. [7] followed these methods and implemented a geometric constraint analysis module in their automated computer-aided fixture design verification system. Their CAFDV system can compute the Jacobian matrix and its rank to determine locating completeness, and can also analyze workpiece displacement and the sensitivity to locating errors. Xiong et al. [8] presented a rank-check method based on the locating matrix W_L (see Appendix). They also introduced the left/right generalized inverses of the locating matrix to analyze the geometric errors of the workpiece. It was shown that the workpiece position/orientation error DX and the locator position error Dr are related as follows:

    Well-constrained:  DX = W_L^{-1} Dr,    (4)
    Over-constrained:  DX = (W_L^T W_L)^{-1} W_L^T Dr,    (5)
    Under-constrained: DX = W_L^T (W_L W_L^T)^{-1} Dr + (I_{6x6} - W_L^T (W_L W_L^T)^{-1} W_L) lambda,    (6)

where lambda is an arbitrary vector. They also derived several indices from these matrices to evaluate locator configurations, followed by optimization through constrained nonlinear programming. However, their study did not address the analysis and revision of non-deterministic location. To date, there has been no research on how a fixture design system should deal with providing deterministic location.

2. Locating completeness evaluation

If a locating scheme fails to provide deterministic location, it is very important for the designer to know what the constraint status is and how the design can be improved. If the fixturing system is over-constrained, information on the unnecessary locators is desired, while in the under-constrained case, knowledge of all unconstrained motions of the workpiece can guide the designer in selecting additional locators or in modifying the locating scheme more efficiently. The overall strategy for characterizing the geometric constraint status of a locating scheme is described in Fig. 2. In this paper, the rank of the locating matrix is employed to evaluate the geometric constraint status (see Appendix for the derivation of the locating matrix). Deterministic location requires six locators providing a full-rank locating matrix W_L. As shown in Fig. 3, given the number of locators n, the unit normal vector [a_i, b_i, c_i] and the position [x_i, y_i, z_i] of each locator, i = 1, 2, ..., n, the n x 6 locating matrix is determined as follows:

    W_L = [ a_i  b_i  c_i  (c_i y_i - b_i z_i)  (a_i z_i - c_i x_i)  (b_i x_i - a_i y_i) ], i = 1, 2, ..., n.    (7)

When rank(W_L) = 6 and n = 6, the workpiece is well-constrained. When rank(W_L) = 6 and n > 6, the workpiece is over-constrained. This means there are (n - 6) unnecessary locators in the locating scheme; the workpiece would still be well-constrained without them. The mathematical expression of this status is that (n - 6) row vectors of the locating matrix can be expressed as linear combinations of the other six row vectors.

Fig. 2. Geometric constraint status characterization.
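To make Eq. (7) and the rank test of Table 1 concrete, the following is a minimal NumPy sketch; it is our illustration, not part of the translated paper, and the function names and the example 3-2-1 locator data are assumptions chosen for demonstration:

```python
import numpy as np

def locating_matrix(normals, positions):
    """Eq. (7): row i is [n_i, r_i x n_i] for a locator with unit normal
    n_i at position r_i, giving the n x 6 locating matrix W_L."""
    n = np.asarray(normals, dtype=float)
    r = np.asarray(positions, dtype=float)
    return np.hstack([n, np.cross(r, n)])

def constraint_status(W_L):
    """Classify the scheme per Table 1 using the rank of W_L."""
    rank = np.linalg.matrix_rank(W_L)
    if rank < 6:
        return "under-constrained"
    return "well-constrained" if W_L.shape[0] == 6 else "over-constrained"

# An illustrative 3-2-1 scheme: three locators under the base (z-normals),
# two on a side face (y-normals), one on an end face (x-normal).
normals   = [(0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 1, 0), (0, 1, 0), (1, 0, 0)]
positions = [(1, 0, 0), (0, 1, 0), (-1, -1, 0), (1, 0, 1), (-1, 0, 1), (0, 1, 1)]
W = locating_matrix(normals, positions)
print(constraint_status(W))   # -> "well-constrained"
```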
Fig. 3. A simplified locating scheme that provides deterministic location.

The developed algorithm identifies the unnecessary locators with the following steps:

1. Find all combinations of (n - 6) locators.
2. For each combination, remove the (n - 6) locators from the locating scheme.
3. Recalculate the rank of the locating matrix for the remaining six locators.
4. If the rank remains unchanged, the removed (n - 6) locators are responsible for the over-constrained status.

This method may yield multiple solutions, and it is left to the designer to decide which set of unnecessary locators should be removed for the best locating performance. When rank(W_L) < 6, the workpiece is under-constrained.

3. Algorithm development and implementation

The algorithm developed here is dedicated to providing information on the unconstrained motions of the workpiece in the under-constrained case. Suppose there are n locators; the relationship between the workpiece position/orientation errors and the locator errors can then be expressed as

    DX = W^R Dr,

where DX = [dx, dy, dz, da_x, da_y, da_z]^T; dx, dy, dz are the displacements along the x-, y-, and z-axes and da_x, da_y, da_z the rotations about them; Dr_i is the geometric error of the ith locator; and W^R = W_L^T (W_L W_L^T)^{-1} is the right generalized inverse of the locating matrix.

To find all unconstrained motions of the workpiece, a vector V = [dx_i, dy_i, dz_i, da_xi, da_yi, da_zi] is introduced such that V . DX = 0. Since rank(W_L) < 6, a nonzero V satisfying this equation must exist. Each nonzero solution V represents an unconstrained motion, and each term of V represents one component of that motion. For example, V = [0, 0, 0, 3, 0, 0] means that rotation about the x-axis is not constrained, while V = [0, 1, 1, 0, 0, 0] means the workpiece can move along the direction given by the vector [0, 1, 1]. There may be infinitely many solutions; the solution space, however, can be constructed from 6 - rank(W_L) basic solutions. The following analysis is dedicated to finding the basic solutions from the dependency among the row vectors of W^R. In a special case, for example, when all w_1j are equal to zero, V has an obvious solution [1, 0, 0, 0, 0, 0], indicating that the displacement along the x-axis is not constrained. This is easy to understand, because in this case the corresponding workpiece position error does not depend on any locator error; hence the related motion is not constrained by the locators. Furthermore, a combined motion is not constrained if one element of DX can be expressed as a linear combination of the other elements; the workpiece can then move, for instance, along a diagonal defined between the x- and y-axes.

To find the solutions in the general case, the following strategy is applied (a sketch is given after Example 2):

1. Eliminate the dependent row(s) in the locating matrix.
2. Compute the right generalized inverse of the modified locating matrix.
3.-4. Normalize the free-motion space.
5. Compute the undetermined components of V.
6. Output the unconstrained motions.

Based on this algorithm, a C++ program was designed to identify the under-constrained status and the unconstrained motions.

Example 1. For a surface grinding operation, a workpiece is located on a fixture system as shown in the figure. The normal vector and position of each locator are given, and hence the locating matrix is determined. Since rank(W_L) = 5 < 6, this locating system provides under-constrained positioning. The program then computes the right generalized inverse of the locating matrix. The first row is recognized as a dependent row, because its removal does not affect the rank of the matrix; the other five rows are independent. Following step 5 of the program, the linear combination of the independent rows that defines the under-constrained status is found. The solution in this special case is obvious: all coefficients are zero. Hence the unconstrained motion of the workpiece can be determined as V = [1, 0, 0, 0, 0, 0], which shows that the workpiece can move along the x direction. Based on this result, an additional locator should be employed to constrain the displacement of the workpiece along the x-axis.

Example 2. Fig. 5 shows a hinged 3-2-1 locating system. The normal vector and position of each locator in the initial design are given, and the locating matrix of this configuration follows. After the design revision, the locating matrix is modified accordingly, and the right generalized inverse of the modified matrix is computed. When the program checks for dependent rows, every row turns out to be dependent on the other five rows. Without loss of generality, the first row is treated as the dependent row, and the 5 x 5 modified inverse matrix is formed. Following step 5, the five undetermined components of V are computed. The resulting vector represents a free motion combining a displacement along the direction [1, 0, 1.713] with a rotation [0.0432, 0.0706, 0.04]. To revise this locating configuration, another locator is added to constrain this free motion of the workpiece, assuming locator L1 is the one removed in step 1. The program can also compute the free motion of the workpiece if a locator other than L1 is removed in step 1, which gives the designer multiple options for revision.

4. Summary

Deterministic location is an important requirement in the design of fixture locating schemes, and the analytical criteria for determining its status are well established. To further examine a non-deterministic status, a method for characterizing the geometric constraint status has been developed. The algorithm identifies the under-constrained status and indicates the unconstrained motions of the workpiece. It also recognizes the over-constrained status and the unnecessary locators. The output information can help the designer analyze and improve an existing locating scheme.

References

[1] Asada H, By AB. Kinematic analysis of workpart fixturing for flexible assembly with automatically reconfigurable fixtures. IEEE J Robot Autom 1985;RA-1:86-93.
[2] Chou YC, Chandru V, Barash MM. A mathematical approach to automatic configuration of machining fixtures: analysis and synthesis. Trans ASME J Eng Ind 1989;111:299-306.
[3] Wang MY, Liu T, Pelinescu DM. Fixture kinematic analysis based on the full contact model of rigid bodies. ASME J Manuf Sci Eng 2003;125:316-24.
[4] Carlson JS. Quadratic sensitivity analysis of fixtures and locating schemes for rigid parts. ASME J Manuf Sci Eng 2001;123(3):462-72.
[5] Marin R, Ferreira P. Kinematic analysis and synthesis of deterministic 3-2-1 locating schemes for machining fixtures. ASME J Manuf Sci Eng 2001;123:708-19.
[6] Hu W. Setup planning and tolerance analysis. PhD dissertation, Worcester Polytechnic Institute; 2001.
[7] Kang Y, Rong Y, Yang J, Ma W. Computer-aided fixture design verification. Assembly Autom 2002;22:350-9.
[8] Rong KY, Huang SH, Hou Z. Advanced computer-aided fixture design. Boston: Elsevier; 2005.

Undergraduate Graduation Design (Thesis)
Title: Process Planning and Fixture Design for an Outer-Cylinder Bushing Part
Major:
Student Name:
Supervisor:
Graduation Date:
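The redundant-locator search of Section 2 and the unconstrained-motion extraction of Section 3 can be sketched in the same style. This is our NumPy illustration rather than the paper's C++ program: we obtain the motion basis from the SVD null space of W_L, which plays the role of the paper's row-dependency analysis, and the five-locator data merely mimic the flavor of Example 1 (a scheme that leaves x-translation free):

```python
import itertools
import numpy as np

def unconstrained_motions(W, tol=1e-10):
    """Basis of twists V with W V = 0; each of the 6 - rank(W) returned rows
    is an unconstrained motion [dx, dy, dz, dax, day, daz] (Section 3)."""
    _, s, Vt = np.linalg.svd(W)
    rank = int(np.sum(s > tol))
    return Vt[rank:]

def redundant_locator_sets(W):
    """Steps 1-4 of Section 2: every (n-6)-subset of locators whose removal
    leaves rank(W) unchanged is a candidate cause of over-constraint."""
    n, full_rank = W.shape[0], np.linalg.matrix_rank(W)
    if n <= 6:
        return []                      # nothing to remove
    return [c for c in itertools.combinations(range(n), n - 6)
            if np.linalg.matrix_rank(np.delete(W, c, axis=0)) == full_rank]

# Five locators (three z-normals, two y-normals): nothing stops x-motion.
normals   = np.array([(0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 1, 0), (0, 1, 0)], float)
positions = np.array([(1, 0, 0), (0, 1, 0), (-1, -1, 0), (1, 0, 1), (-1, 0, 1)], float)
W = np.hstack([normals, np.cross(positions, normals)])
print(unconstrained_motions(W))   # one row, ~ +/-[1, 0, 0, 0, 0, 0]
```

Running this prints a single basis twist proportional to [1, 0, 0, 0, 0, 0], matching the conclusion of Example 1 that an extra locator is needed against x-displacement.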
Graduation Design (Thesis) Task Assignment

I. Title
Process Planning and Fixture Design for an Outer-Cylinder Bushing Part

II. Guiding Principles and Objectives
The graduation design (thesis) is an important practical teaching step for developing the student's self-study ability, comprehensive application ability, and capacity for independent work. In the graduation design, the student should independently undertake a reasonably complete engineering design task. Students are required to bring their initiative, enthusiasm, and creativity into play, focusing on cultivating independent working ability, the ability to analyze and solve problems, and a rigorous, down-to-earth working style; to integrate theory with practice; and to carry out creative work with a serious scientific attitude, completing the task conscientiously and on time.

III. Main Deliverables
1. One part drawing;
2. One blank (stock) drawing;
3. One process specification;
4. Process tooling (1-2 fixture sets);
5. One design report.

IV. Schedule and Requirements
1. Analyze and draw the part drawing: 2 weeks
2. Draw the blank drawing: 1 week
3. Design the process route and compile the process specification: 5 weeks
4. Design the process tooling: 4 weeks
5. Write the design report (thesis): 2 weeks

V. Main References
1. Yan Guangming (ed.), Fundamentals of Modern Manufacturing Processes, Northwestern Polytechnical University Press, 2007
2. Li Yimin (Harbin Institute of Technology, ed.), Concise Handbook of Machining Process Design, China Machine Press, 1994

Process Catalog (Product model: QAI 9-4; Part/assembly No.: YB458-71)

Op. No.  Operation name            Equipment          Cards
5        Stock preparation         Sawing machine     1
10       Turning                   C620               1
15       Turning                   C620               1
20       Rough milling             6H11               1
25       Boring                    C620               1
30       Turning (outer circle)    C620               1
35       Profile milling           6H11               1
40       Turning (outer circle)    C620               1
45       Tapping                   CH12A              1
50       Arc milling               6H11               1
55       Filing                    Fitter's bench     1
60       Hole lapping              Lapping head       1
65       Inspection                Inspection bench   1
70       Grinding                  3153               1
75       Turning                   C620               1
80       Deburring                 Fitter's bench     1
85       Hole lapping              Lapping head       1
90       Inspection                Inspection bench   1
95       Passivation                                  1
100      Inspection                Inspection bench   1

Operation card. Part: outer-cylinder bushing; material: QAI 9-4; operation: stock preparation (Op. 5); equipment: sawing machine. Stock blank 50*31 0.5.

Operation card. Part: outer-cylinder bushing; material: QAI 9-4; operation: turning (Op. 10); equipment: C620. Fixture: three-jaw chuck; tool: drill; gauges: plug gauge, snap gauge.

Operation card. Part: outer-cylinder bushing; material: QAI 9-4; operation: turning (Op. 15); equipment: C620. Fixture: three-jaw chuck; gauge: snap gauge.

Operation card. Part: outer-cylinder bushing; material: QAI 9-4; operation: rough milling (Op. 20); equipment: 6H11. Requirements: d32 60 48; fixture: rotary table B-4x6; tool: copy (form) milling cutter.

Operation card. Part: outer-cylinder bushing; material: QAI 9-4; operation: boring (Op. 25); equipment: C620.

Operation card. Part: outer-cylinder bushing; material: QAI 9-4; operation: turning the outer circle (Op. 30); equipment: C620.

Operation card. Part: outer-cylinder bushing; material: QAI 9-4; operation: profile milling (Op. 35); equipment: 6H11.

Operation card. Part: outer-cylinder bushing; material: QAI 9-4; operation: turning the outer circle (Op. 40); equipment: C620.

Graduation Design (Thesis) Opening
Report

Department: Mechatronic Engineering Department
Major: Mechanical Design, Manufacturing and Automation
Class:
Student Name:
Student No.:
Supervisor:
Report Date:

Graduation Design (Thesis) Opening Report

Thesis title: Process planning for the outer-cylinder bushing, and design of a lathe mandrel fixture and a milling copying fixture
Student name:
Topic source (check one): Research [ ]  Production [x]  Laboratory [ ]  Special study [ ]
Thesis type (check one): Design [ ]  Thesis [x]  Other [ ]

1. Significance of the topic
In the product, one end of the outer-cylinder bushing connects to the outer cylinder assembly of the hydraulic booster, while the center hole mates with the outer circle of the piston and supports one end of it. The part therefore plays an important role in the machine and is produced in large quantities, so completing this topic has the following benefits:
1. It familiarizes me with the characteristics and working principle of the outer-cylinder bushing.
2. It helps me master the process for the outer-cylinder bushing so that the designed part meets the accuracy, surface roughness, and other process requirements.
3. By designing the fixtures to satisfy the process requirements, it lets me combine my theoretical knowledge with practice.
4. It improves the student's independent design ability and further meets employers' expectations of graduates.

2. Basic contents and key points
1. Analysis of the part: (1) function of the part; (2) process analysis of the part.
2. Process specification design: (1) determining the blank manufacturing method; (2) selection of locating datums; (3) drafting the process route; (4) selection of machine tools, cutting tools, fixtures, and gauges; (5) determination of machining allowances, operation dimensions, and blank dimensions; (6) determination of cutting parameters and basic times.
3. Special fixture design: (1) overview of machine-tool fixtures; (2) selection of locating datums; (3) calculation of cutting and clamping forces; (4) statement of the problem; (5) fixture design; (6) features of the fixture design; (7) fixture classification; (8) development of fixture design technology; (9) fixture base components; (10) design method and steps.
4. Design report.

3. Expected results
The designed process for the outer-cylinder bushing is reasonable; the resulting process specification satisfies the drawing requirements for actual dimensions, surface roughness, and machining accuracy; and the fixture design is sound and meets the process requirements.

4. Existing problems and proposed measures
Problems: 1. Lack of proficiency in CAD drafting. 2. Insufficient understanding of the part drawing. 3. Insufficient knowledge of the machining process.
Measures: 1. Review the use of the CAD software carefully. 2. Consult the advisor and classmates about unclear aspects of the part drawing. 3. Study machining process knowledge to make up for my deficiencies.

5. Schedule
1. Analyze and draw the part drawing: 1 week
2. Draw the blank drawing: 1 week
3. Design the process route and compile the process specification: 4 weeks
4. Design the process tooling: 3 weeks
5. Write the design report (thesis): 2 weeks

6. References
[1] Wang Xiankui. Mechanical Manufacturing Technology [M]. Beijing: Tsinghua University Press, 1989
[2] Deng Wenying, Song Lihong. Metal Technology (5th ed.). Beijing: Higher Education Press, 2008
[3] Zhao Zhixiu (ed.). Mechanical Manufacturing Technology [M]. Beijing: China Machine Press, 1985
[4] Xiao Jide. Machine Tool Fixture Design [M]. Beijing: China Machine Press, 2005
[5] Guan Huizhen, Feng Xin'an. Design of Machine Manufacturing Equipment (3rd ed.). Beijing: China Machine Press, 2009
[6] Sun Liyuan. Guide to Machining Processes and Special Fixture Design [M]. Beijing: Metallurgical Industry Press, 2007
[7] Yang Shuzi. Machining Process Engineer's Handbook [M]. Beijing: China Machine Press, 2001
[8] Zhu Yaoxiang, Pu Linxiang. Modern Fixture Handbook [M]. Beijing: China Machine Press, 2010
[9] Wu Zongze, Gao Zhi. Machine Design (2nd ed.). Beijing: Higher Education Press, 2009

Advisor's comments:                 Advisor's signature:        Date:
Department's comments:              Department head's signature:    Date:

Robot companion localization at home and in the office

Arnoud Visser, Jürgen Sturm, Frans Groen
Intelligent Autonomous Systems, Universiteit van Amsterdam
http://www.science.uva.nl/research/ias/

Abstract

The abilities of mobile robots depend greatly on the performance of basic skills such as vision and localization. Although great progress has been made in exploring and mapping extensive public areas with large holonomic robots on wheels, less attention has been paid to the localization of a small robot companion in a confined environment such as a room at home or in the office. In this article, a localization algorithm for the popular Sony entertainment robot Aibo inside a room is worked out. This algorithm can provide localization information based on the natural appearance of the walls of the room. The algorithm starts by making a scan of the surroundings, turning the head and the body of the robot on a certain spot. The robot learns the appearance of the surroundings at that spot by storing color transitions at different angles in a panoramic index. The stored panoramic appearance is used to determine the orientation (including a confidence value) relative to the learned spot for other points in the room. When multiple spots are learned, an absolute position estimate can be made.
The applicability of this kind of localization is demonstrated in two environments: at home and in an office.

1 Introduction

1.1 Context

Humans orientate easily in their natural environments. To be able to interact with humans, mobile robots also need to know where they are. Robot localization is therefore an important basic skill of a mobile robot, such as a robot companion like the Aibo. Yet, the Sony entertainment software contained no localization software until the latest release1. Still, many other applications for a robot companion - like collecting a newspaper from the front door - strongly depend on fast, accurate and robust position estimates. As long as the localization of a walking robot like the Aibo is based on odometry after sparse observations, no robust and accurate position estimates can be expected.

Most of the localization research with the Aibo has concentrated on the RoboCup. At the RoboCup2, artificial landmarks such as colored flags, goals and field lines can be used to achieve localization accuracies below six centimeters [6, 8].

The price that these RoboCup approaches pay is their total dependency on artificial landmarks of known shape, position and color. Most algorithms even require manual calibration of the actual colors and lighting conditions used on a field, and still are quite susceptible to disturbances around the field, for instance produced by brightly colored clothes in the audience.

The interest of the RoboCup community in more general solutions has been (and still is) growing over the past few years. The almost-SLAM challenge3 of the 4-Legged league is a good example of the state of the art in this community. For this challenge, additional landmarks with bright colors are placed around the borders of a RoboCup field. The robots get one minute to walk around and explore the field. Then the normal beacons and goals are covered up or removed, and the robot must move to a series of five points on the field, using the information learnt during the first minute. The winner of this challenge [6] reached the five points by using mainly the information of the field lines; the additional landmarks were only used to break the symmetry on the soccer field.

A more ambitious challenge is formulated in the newly founded RoboCup @ Home league4. In this challenge the robot has to safely navigate toward objects in a living room environment. The robot gets 5 minutes to learn the environment. After the learning phase, the robot has to visit 4 distinct places/objects in the scenario, at least 4 meters away from each other, within 5 minutes.

1 Aibo Mind 3 remembers the direction of its station and toys relative to its current orientation.
2 RoboCup Four Legged League homepage, last accessed in May 2006, http://www.tzi.de/4legged
3 Details about the Simultaneous Localization and Mapping challenge can be found at http://www.tzi.de/4legged/pub/Website/Downloads/Challenges2005.pdf

1.2 Related Work

Many researchers have worked on the SLAM problem in general, for instance on panoramic images [1, 2, 4, 5]. These approaches are inspiring, but only partially transferable to the 4-Legged league. The Aibo is not equipped with an omnidirectional high-quality camera. The camera in the nose has only a horizontal opening angle of 56.9 degrees and a resolution of 416 x 320 pixels. Further, the horizon in the images is not constant, but depends on the movements of the head and legs of the walking robot.
So each image is taken from a slightly different perspective, and the path of the camera center is only to a first approximation a circle. Further, the images are taken while the head is moving. When moving at full speed, this can give a difference of 5.4 degrees between the top and the bottom of the image, so the image appears tilted as a function of the turning speed of the head. Still, the location of the horizon can be calculated by solving the kinematic equations of the robot. To process the images, a 576 MHz processor is available in the Aibo, which means that only simple image processing algorithms are applicable. In practice, the image is analyzed by following scan-lines with a direction relative to the calculated horizon. In our approach, multiple sectors above the horizon are analyzed, with in each sector multiple scan-lines in the vertical direction. One of the general approaches [3] also divides the image into multiple sectors, but that image is omnidirectional and each sector is analyzed on its average color. Our method analyzes each sector on a different characteristic feature: the frequency of color transitions.

2 Approach

The main idea is quite intuitive: we would like the robot to generate and store a 360° circular panorama image of its environment while it is in the learning phase. After that, it should align each new image with the stored panorama, and from that the robot should be able to derive its relative orientation (in the localization phase). This alignment is not trivial because the new image can be translated, rotated, stretched and perspectively distorted when the robot does not stand at the point where the panorama was originally learned [11].

Of course, the Aibo is not able (at least not in real time) to compute this alignment on full-resolution images. Therefore a reduced feature space is designed so that the computations become tractable5 on an Aibo. So, a reduced circular 360° panorama model of the environment is learned. Figure 1 gives a quick overview of the algorithm's main components.

The Aibo performs a calibration phase before the actual learning can start. In this phase the Aibo first decides on a suitable camera setting (i.e. camera gain and the shutter setting) based on the dynamic range of brightness in the autoshutter step. Then it collects color pixels by turning its head for a while and finally clusters these into the 10 most important color classes in the color clustering step, using a standard implementation of the Expectation-Maximization algorithm assuming a Gaussian mixture model [9]. The result of the calibration phase is an automatically generated lookup table that maps every YCbCr color onto one of the 10 color classes and can therefore be used to segment incoming images into characteristic color patches (see figure 2(a)). These initialization steps are worked out in more detail in [10].

4 RoboCup @ Home League homepage, last accessed in May 2006, http://www.ai.rug.nl/robocupathome/
5 Our algorithm consumes approximately 16 milliseconds per image frame, therefore we can easily process images at the full Aibo frame rate (30 fps).

Figure 1: Architecture of our algorithm.
Figure 2: Image processing, from the raw image to the sector representation: (a) unsupervised learned color segmentation; (b) sectors and frequent color transitions visualized. This conversion consumes approximately 6 milliseconds/frame on a Sony Aibo ERS7.

2.1 Sector signature correlation

Every incoming image is now divided into its corresponding sectors6.
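As a concrete stand-in for the calibration step, here is a short sketch of the color clustering. The paper uses its own EM implementation [9]; we substitute scikit-learn's GaussianMixture and random placeholder pixel data, so every name and parameter below is our assumption, not the authors' code:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder for the YCbCr samples collected while the head turns.
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(10_000, 3)).astype(float)

# EM with a 10-component Gaussian mixture, as in the calibration phase.
gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0)
gmm.fit(pixels)

# Lookup table: every quantized YCbCr triple -> one of the 10 color classes
# (quantized to 32 levels per channel here to keep the table small).
grid = np.stack(np.meshgrid(*[np.arange(0, 256, 8)] * 3, indexing="ij"), -1)
lut = gmm.predict(grid.reshape(-1, 3).astype(float)).reshape(32, 32, 32)

def classify(y, cb, cr):
    """Segment one pixel into its color class via the lookup table."""
    return lut[y // 8, cb // 8, cr // 8]
```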
The sectors are located above the calculated horizon, which is found by solving the kinematics of the robot. Using the lookup table from the unsupervised learned color clustering, we can compute the sector features by counting per sector the transition frequencies between each two color classes in the vertical direction. This yields histograms of 10x10 transition frequencies per sector, which we subsequently discretize into 5 logarithmically scaled bins. In figure 2(b) the most frequent color transitions for each sector are displayed. Some sectors have multiple color transitions in the most frequent bin, other sectors have a single or no dominant color transition. This is only a visualization; not just the most frequent color transitions, but the frequencies of all 100 color transitions are used as the characteristic feature of a sector.

In the learning phase we estimate all these 80x(10x10) distributions7 by turning the head and body of the robot. We define a single distribution for a currently perceived sector by

    P_current(i, j, bin) = 1 if discretize(freq(i, j)) = bin, and 0 otherwise,    (1)

where i, j are the indices of the color classes and bin is one of the five frequency bins. Each sector is seen multiple times, and the many frequency count samples are combined into a distribution learned for that sector by the equation

    P_learned(i, j, bin) = count_sector(i, j, bin) / Sum_{bin in frequencyBins} count_sector(i, j, bin).    (2)

After the learning phase we can simply multiply the current and the learned distribution to get the correlation between a currently perceived and a learned sector:

    Corr(P_current, P_learned) = Prod_{i, j in colorClasses; bin in frequencyBins} P_learned(i, j, bin) . P_current(i, j, bin).    (3)

6 80 sectors corresponding to 360°; with an opening angle of the Aibo camera of approx. 50°, this yields between 10 and 12 sectors per image (depending on the head pan/tilt).
7 When we use 16-bit integers, a complete panorama model can be described by (80 sectors) x (10 colors x 10 colors) x (5 bins) x (2 bytes) = 80 KB of memory.

2.2 Alignment

After all the correlations between the stored panorama and the new image signatures have been evaluated, we would like to get an alignment between the stored and seen sectors such that the overall likelihood of the alignment becomes maximal. In other words, we want to find a diagonal path with minimal cost through the correlation matrix. This minimal path is indicated by green dots in figure 3. The path is extended to a green line for the sectors that are not visible in the latest perceived image.

We consider the fitted path to be the true alignment and extract the rotational estimate phi_robot from the offset of its center pixel from the diagonal (Delta_sectors):

    phi_robot = (360° / 80) . Delta_sectors.    (4)

This rotational estimate is the difference between the solid green line and the dashed white line in figure 3, indicated by the orange halter. Further, we estimate the noise by fitting a second path through the correlation matrix, far away from the best-fitted path:

    SNR = Sum_{(x,y) in minimumPath} Corr(x, y) / Sum_{(x,y) in noisePath} Corr(x, y).    (5)

The noise path is indicated in figure 3 with red dots.

Figure 3: Visualization of the alignment step while the robot is scanning with its head: (a) robot standing on the trained spot (the matching line is just the diagonal); (b) robot turned right by 45 degrees (the matching line is displaced to the left). The green solid line marks the minimum path (assumed true alignment) while the red line marks the second-minimal path (assumed peak noise).
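A compact sketch of the sector signature and alignment machinery may help here. This is our simplified reading, not the authors' code: we interpret Eq. (3) as multiplying, over all 100 color pairs, the learned probability of the currently observed bin, and we replace the minimal-cost path fit by a rigid circular shift of the diagonal, with the runner-up shift standing in for the noise path of Eq. (5):

```python
import numpy as np

N_SECTORS, N_CLASSES, N_BINS = 80, 10, 5

def sector_signature(column_classes):
    """Count vertical color-class transitions in one sector and discretize
    the 10x10 frequencies into 5 logarithmic bins; one-hot as in Eq. (1)."""
    freq = np.zeros((N_CLASSES, N_CLASSES), dtype=int)
    for col in column_classes:                   # one scan-line per column
        for a, b in zip(col[:-1], col[1:]):
            freq[a, b] += 1
    bins = np.minimum(np.floor(np.log2(freq + 1)).astype(int), N_BINS - 1)
    current = np.zeros((N_CLASSES, N_CLASSES, N_BINS))
    for i in range(N_CLASSES):
        for j in range(N_CLASSES):
            current[i, j, bins[i, j]] = 1.0
    return current

def correlate(current, learned):
    """Eq. (3), read as the likelihood of the observed bins under the
    learned distribution, multiplied over all 10x10 color pairs."""
    p = np.sum(learned * current, axis=2)        # P_learned at observed bin
    return float(np.exp(np.sum(np.log(p + 1e-9))))

def align(corr):
    """Score every circular shift of the diagonal through the correlation
    matrix; Eq. (4) turns the best shift into degrees."""
    n_seen = corr.shape[1]
    scores = [sum(corr[(off + s) % N_SECTORS, s] for s in range(n_seen))
              for off in range(N_SECTORS)]
    best = int(np.argmax(scores))
    heading = 360.0 / N_SECTORS * best           # Eq. (4)
    snr = scores[best] / (sorted(scores)[-2] + 1e-9)   # cf. Eq. (5)
    return heading, snr
```

Here `corr` would hold the per-sector correlations between the 80 learned signatures and the 10-12 currently visible ones; the real system fits a full minimal-cost path rather than a rigid shift.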
The white dashed line represents the diagonal, while the orange halter illustrates the distance between the found alignment and the center diagonal (Delta_sectors).

2.3 Position Estimation with Panoramic Localization

The algorithm described in the previous section can be used to get a robust bearing estimate together with a confidence value for a single trained spot. As we finally want to use this algorithm to obtain full localization, we extended the approach to support multiple training spots. The main idea is that the robot determines to what degree its current position resembles the previously learned spots and then uses interpolation to estimate its exact position. As we think that this approach could also be useful for the RoboCup @ Home league (where robot localization in complex environments like kitchens and living rooms is required), it could become possible that we finally want to store a comprehensive panorama model library containing dozens of previously trained spots (for an overview see [1]).

However, due to the computation time of the feature space conversion and panorama matching, only a single training spot and its corresponding panorama model can be selected per frame. Therefore, the robot cycles through the learned training spots one by one. Every panorama model is associated with a gradually changing confidence value representing a sliding average of the confidence values we get from the per-image matching.

After training, the robot memorizes a given spot by storing the confidence values received from the training spots. By comparing a new confidence value with its stored reference, it is easy to deduce whether the robot stands closer to or farther from the imprinted target spot. We assume that the imprinted target spot is located somewhere between the training spots. Then, to compute the final position estimate, we simply weight each training spot with its normalized corresponding confidence value:

    position_robot = Sum_i position_i . confidence_i / Sum_j confidence_j.    (6)

This should yield zero when the robot is assumed to stand at the target spot, or a translation estimate toward the robot's position when the confidence values are not in balance anymore.

To prove the validity of this idea, we trained the robot on four spots on a regular 4-Legged field in our robolab. The spots were located along the axes, approximately 1 m away from the center. As target spot, we simply chose the center of the field. The training itself was performed fully autonomously by the Aibo and took less than 10 minutes. After training was complete, the Aibo walked back to the center of the field. We recorded the found position, kidnapped the robot to an arbitrary position around the field, and let it walk back again.

Please be aware that our approach to multi-spot localization is at this moment rather primitive and has to be understood only as a proof of concept. In the end, the panoramic localization data from vision should of course be processed by a more sophisticated localization algorithm, like a Kalman or particle filter (last but not least to incorporate movement data from the robot).

3 Results

3.1 Environments

We selected four different environments to test our algorithm under a variety of circumstances. The first two experiments were conducted at home and in an office environment8 to measure performance under real-world circumstances. The experiments were performed on a cloudy morning, a sunny afternoon and late in the evening.
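Eq. (6) itself reduces to a few lines. The spot coordinates below (four spots about 1 m from the field center, expressed relative to the target spot) are our stand-ins for the experiment's layout, not measured data:

```python
import numpy as np

def estimate_position(spot_positions, confidences):
    """Eq. (6): weight each training spot by its normalized confidence to
    interpolate the robot position between the learned spots."""
    pos = np.asarray(spot_positions, dtype=float)   # k x 2, one (x, y) per spot
    conf = np.asarray(confidences, dtype=float)
    weights = conf / conf.sum()
    return weights @ pos

# Four spots ~1 m from the center along the axes, as in the field test.
spots = [(1, 0), (0, 1), (-1, 0), (0, -1)]
print(estimate_position(spots, [0.25, 0.25, 0.25, 0.25]))  # -> [0. 0.]
```

With balanced confidences the estimate sits at the target spot (the origin); any imbalance pulls the estimate toward the better-matching spots, which is exactly the translation estimate described above.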
Furthermore, we conducted exhaustive tests in our laboratory. Even more challenging, we took an Aibo outdoors (see [7]).

3.2 Measured results

Figure 4(a) illustrates the results of a rotational test in a normal living room. As the error in the rotation estimates ranges between -4.5 and +4.5 degrees, we may assume an error in alignment of a single sector9; moreover, the size of the confidence interval can be translated into maximally two sectors, which corresponds to the maximal angular resolution of our approach.

8 XX office, DECIS lab, Delft
9 full circle of 360° divided by 80 sectors

Figure 4: Typical orientation estimation results of experiments conducted at home: (a) rotational test in a natural environment (living room, sunny afternoon); (b) translational test in a natural environment (child's room, late in the evening). In the rotational experiment on the left, the robot is rotated over 90 degrees on the same spot, and every 5 degrees its orientation is estimated. The robot is able to find its true orientation with an error estimate equal to one sector of 4.5 degrees. The translational test on the right is performed in a child's room. The robot is translated over a straight line of 1.5 meter, which covers the major part of the free space in this room. The robot is able to maintain a good estimate of its orientation, although the error estimate increases away from the location where the appearance of the surroundings was learned.

Figure 4(b) shows the effects of a translational dislocation in a child's room. The robot was moved back and forth along a straight line through the room (via the trained spot somewhere in the middle). The robot is able to estimate its orientation quite well on this line. The discrepancy with the true orientation is between +12.1 and -8.6 degrees, close to the walls. This is also reflected in the computed confidence interval, which grows steadily as the robot is moved away from the trained spot. The results are quite impressive for the relatively big movements in a small room and the resulting significant perspective changes in that room.

Figure 5(a) also stems from a translational test (cloudy morning), which was conducted in an office environment. The free space in this office is much larger than at home. The robot was moved along a 14 m long straight line to the left and right and its orientation was estimated. Note that the error estimate stays low at the right side of this plot. This is an artifact which nicely reflects the repetition of similarly looking working islands in the office.

In both translational tests it can be seen intuitively that the rotation estimates are within an acceptable range. This can also be shown quantitatively (see figure 5(b)): both the orientation error and the confidence interval increase slowly and gracefully when the robot is moved away from the training spot.

Finally, figure 6 shows the result of the experiment to estimate the absolute position with multiple learned spots.
It can be seen that the localization is not as accurate as traditional approaches, but it can still be useful for some applications (bearing in mind that no artificial landmarks are required). We recorded repeatedly a deviation to the upper right, which we think can be explained by the fact that different learning spots do not produce equally strong confidence values; we believe we will be able to correct for that by means of confidence value normalization in the near future.

4 Conclusion

Although at first sight the algorithm seems to rely on specific texture features of the surrounding surfaces, in practice no such dependency could be found. This can be explained by two reasons: firstly, as the (vertical) position of a color transition is not used anyway, the algorithm is quite robust against (vertical) scaling. Secondly, as the algorithm aligns on many color transitions in the background (typically more than a hundred in the same sector), the few color transitions produced by objects in the foreground (like beacons and spectators) have a minor impact on the match, because their sizes relative to the background are comparatively small.

The lack of an accurate absolute position estimate seems to be a clear drawback with respect to the other methods, but bearing information alone can already be very useful for certain applications.

Figure 5: Challenging orientation results: (a) translational test in a natural environment (office, cloudy morning); (b) signal degradation as a function of the distance to the learned spot (measured in the laboratory). On the left, a translational test in an office environment over 14 meters along a line 80 centimeters from the (single) learned spot. A translation to the left of the office increases the error estimate, as expected. When translating to the right of the office, the orientation estimate oscillates, but the error estimate stays low. This is due to repeating patterns in the office: after 4 meters there is another group of desks and chairs which resem