What are the differences between "recursion" and "iteration"?
Recursion:
[diagram lost in extraction]
Iteration:
[diagram lost in extraction]
Just my two cents. Movie examples: iteration is Edge of Tomorrow; recursion is Inception.
Iteration takes the output, feeds it back in as input, and processes it again. For example: pointing a camera at its own monitor; facing one mirror toward another; pointing a karaoke microphone at the speaker; or a machine gun that, after firing, uses the recoil to pull the trigger again.
Expressed as a program: for (int i = 0; i < 100; i++) n = f(n); For example, the following code generates a fractal image:
vec2 center = vec2(0.8, 0);
float blue = 0.3;
void mainImage( out vec4 fragColor, in vec2 fragCoord ) {
    vec2 z, c;
    float scale = 20.0 / iGlobalTime; // zoom in
    vec2 uv = fragCoord.xy / iResolution.xy;
    c.x = 1.3333 * (uv.x - 0.5) * scale - center.x;
    c.y = (uv.y - 0.5) * scale - center.y;
    z = c;
    vec4 col = vec4(0, 0, 0, 0);
    for (int i = 0; i < 100; i++) {
        float x = (z.x * z.x - z.y * z.y) + c.x;
        float y = (z.y * z.x + z.x * z.y) + c.y;
        blue += 0.028;
        if ((x * x + y * y) > 4.0) {
            col = vec4(0, 0, blue, 0);
            break;
        }
        z.x = x;
        z.y = y;
    }
    fragColor = col;
}
Another everyday example of iteration: suppose you have a husky and a Chinese rural dog, and you want to breed a fairly purebred husky from them. First mate the husky with the rural dog and get a puppy. Then mate the husky with that puppy (once it has grown up, of course). Keep mating the husky with each newborn puppy, iterating generation after generation, and eventually you get a fairly purebred husky. If the folk rule of gestation times ("cats three months, dogs four, pigs five, sheep six, cows seven, horses eight") bothers you, swap the two dogs for a mouse and a pet hamster; they can iterate once a month.
Recursion, put simply, is something calling itself, something containing itself. Expressed as a program: void f(int n){ f(n - 1); } Never mind that this code never terminates; the point is that the function calls itself. For example: a monitor inside a monitor, a mirror inside a mirror.

But earlier I said that pointing a camera at a monitor, or a mirror at a mirror, is iteration. Why is it now recursion? There is no contradiction. The act of pointing a camera at its monitor, or a mirror at a mirror, feeds output back in as input and processes it again, so it is iteration. The effect of a monitor containing a monitor, or a mirror containing a mirror, is something containing itself, so it is recursion. Likewise in the image above: the code that generates it is iterative, while the fractal effect is recursive.

Iteration updates a variable's old value; recursion is a function calling itself from within.
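The void f(int n){ f(n - 1); } snippet above never terminates; real recursion needs a base case. A minimal sketch in JavaScript (the function name sumTo is my own, for illustration):

```javascript
// Recursion with a base case: the self-call stops when n reaches 0.
function sumTo(n) {
  if (n <= 0) return 0;      // base case: nothing left to add
  return n + sumTo(n - 1);   // self-call on a smaller problem
}

sumTo(5); // 5 + 4 + 3 + 2 + 1 = 15
```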
Recursion is a tree structure: each branch is explored as far as it can go, and when it can go no further, the process backtracks. Each node is visited only once.

Iteration is a ring structure: each iteration goes once around the loop, never skipping a step, and the loop runs again and again, so each node is visited repeatedly.

I once read an English explanation of the difference that I can no longer find. Roughly: in recursion, the size of the problem shrinks, which is how the solution is eventually reached; iteration is an approach from far to near, where the problem does not necessarily shrink, but the estimate is gradually adjusted toward the answer. Computing n! recursively fits this description well, and so do many methods from numerical analysis, such as Newton's method.
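The tree picture can be made concrete with a short depth-first walk. This sketch (node shape and names are my own) visits each node exactly once and backtracks when a branch is exhausted:

```javascript
// Depth-first traversal of a tree: explore each branch as far as it goes,
// then back up; every node is visited exactly once.
function visit(node, out) {
  out.push(node.value);          // visit this node
  for (const child of node.children) {
    visit(child, out);           // explore the branch to its end
  }                              // no children left: backtrack
  return out;
}

const tree = {
  value: 1,
  children: [
    { value: 2, children: [{ value: 4, children: [] }] },
    { value: 3, children: [] },
  ],
};
visit(tree, []); // [1, 2, 4, 3]
```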
The relationship between the two
1. From the computer's point of view, recursion is a special case of iteration.
This example computes the factorial both ways in JavaScript; press F12 in a browser to open the console and experiment there.
// Iteration: repeat some procedure until the goal is reached. Bisection and
// Newton's method in mathematics are good examples of iteration.
function iteration(x){
    var sum = 1;
    for (x; x >= 1; x--) {
        sum = sum * x;
    }
    return sum;
}
// Recursion: iteration in which a thing calls itself. (That may not match
// the formal definition; it is just my own understanding.)
function recursion(x){
    if (x > 1) {
        return x * recursion(x - 1);
    } else {
        return 1;
    }
}
2. Every iterative example has a recursive formulation, and vice versa.
For instance, rewrite the following bubble sort (descending order) from iterative form to recursive form:
From the standpoint of code elegance, the recursive form needs less indentation and looks flatter.

function swap(array, i, j){
    const temp = array[i]
    array[i] = array[j]
    array[j] = temp
}
function bubble(array){
    let length = array.length
    for (let i = length - 1; i > 0; i--) {
        for (let j = 0; j < i; j++) {
            if (array[j] < array[j + 1]) {
                swap(array, j, j + 1)
            }
        }
    }
    return array
}
answer:
function swap(array, i, j){
    const temp = array[i]
    array[i] = array[j]
    array[j] = temp
}
function bubble(array, i = array.length - 1, j = 0){
    if (i === 0) {
        return array
    }
    if (j >= i) {
        j = 0
        i--
    }
    if (array[j] < array[j + 1]) {
        swap(array, j, j + 1)
    }
    return bubble(array, i, j + 1)
}
Drawbacks of recursion
1. Recursion easily produces a "stack overflow" error
Because hundreds or thousands of call records must be kept alive at once, recursion is very memory-hungry; this is why "tail-call optimization" exists. Iteration, at worst, makes the browser appear frozen by occupying the thread with a long computation; it will not throw a stack-overflow error at runtime.
[Note: I later found that Chrome v61.0.3163.100 does not actually perform tail-call optimization.]
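For reference, the tail-recursive form that tail-call optimization targets carries the running result in an accumulator argument, so nothing remains to do after the self-call. In an engine without TCO (such as the Chrome build noted above) it still consumes stack frames, and the safe rewrite is a plain loop. A sketch (function names are mine):

```javascript
// Tail-recursive factorial: the partial result travels in `acc`,
// so the self-call is the last action (tail position).
function factTail(n, acc = 1) {
  if (n <= 1) return acc;
  return factTail(n - 1, n * acc);
}

// Equivalent loop: what a tail-call-optimizing engine effectively runs.
function factLoop(n) {
  let acc = 1;
  for (; n > 1; n--) acc *= n;
  return acc;
}

factTail(6); // 720
factLoop(6); // 720
```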
2. In terms of efficiency, recursion can involve redundant computation
The recursive approach can recompute the same subproblems (the classic example is the Fibonacci sequence: computing the 6th term requires the 4th and 5th, and computing the 5th requires the 4th again, so work is repeated). Iteration has an absolute advantage here.
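That redundancy can be removed while keeping the recursive shape by caching results (memoization), so each subproblem is computed only once. A minimal sketch (my own, assuming fib(1) = fib(2) = 1):

```javascript
// Memoized Fibonacci: cache each fib(k) so the exponential call tree
// collapses to a linear number of distinct calls.
function fibMemo(n, cache = new Map()) {
  if (n <= 2) return 1;                 // fib(1) = fib(2) = 1
  if (cache.has(n)) return cache.get(n);
  const value = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  cache.set(n, value);
  return value;
}

fibMemo(6); // 8
```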
But this does not mean recursion can be replaced entirely, because recursion is more readable.
Iteration: a looping construct, e.g. a for or while loop. Recursion: a conditional construct, e.g. if/else, calling itself and exiting at the right moment.
Iteration has far better time complexity than recursion... recursion does a lot of redundant computation~
- 深究遞歸和迭代的區別、聯繫、優缺點及實例對比
(the best article I have seen explaining the difference between recursion and iteration)
It summarizes the relationship between the two:
1) Recursion always involves iteration, but iteration does not necessarily involve recursion; most cases can be converted into each other.
2) Prefer iteration where it can be used: recursive calls waste space, and deep recursion easily overflows the stack. /* relatively speaking */
- In mathematics, recursion emphasizes that a new value depends on several of the previously computed values, e.g. the Fibonacci sequence, a(n) = a(n-1) + a(n-2); whereas iteration generally computes each value from just the previous one, i.e. x(n+1) = f(x(n)).
- In algorithm analysis (I am not well versed in analyzing recursive complexity; see 《演算法導論》, Introduction to Algorithms), a recursive method is usually analyzed by turning the recurrence into a tree and expanding it downward.
Common iterative methods include Newton's method and gradient descent; as Tianyuan explained, they are a kind of looping approximation in which an initial value converges to a limit through a sequence of iterations.
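To make the "looping approximation" concrete, here is a sketch of Newton's method for square roots, i.e. f(x) = x^2 - a, which gives the update x ← (x + a/x)/2 (the starting guess and tolerance are my own choices):

```javascript
// Newton's method for sqrt(a): repeatedly replace the estimate with a
// better one until it is close enough.
function newtonSqrt(a, tolerance = 1e-12) {
  let x = a > 1 ? a : 1;           // crude initial guess
  while (Math.abs(x * x - a) > tolerance) {
    x = (x + a / x) / 2;           // Newton update for f(x) = x^2 - a
  }
  return x;
}
```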
(Also worth reading the Wikipedia entries.)
I think you will understand the differences best by actually using these concrete methods yourself.
An example: you want to buy a toy for a child.
Recursion: you do not really understand what children want, so to narrow the problem down you send your son to choose. Your son is only marginally better and still does not really understand children, so to narrow it further he sends your grandson to choose. And so on, until the right toy is found.
Iteration: you pick a toy, decide it is not right; pick another, still not right; and so on until the right toy is found.
In one sentence: recursion is calling yourself, with each call aiming to shrink the problem; iteration is doing something many times, with each pass aiming to get closer to the goal.
We begin by considering the factorial function, defined by

n! = n · (n−1) · (n−2) ⋯ 3 · 2 · 1.

There are many ways to compute factorials. One way is to make use of the observation that n! is equal to n times (n−1)! for any positive integer n:

n! = n · [(n−1) · (n−2) ⋯ 3 · 2 · 1] = n · (n−1)!.

Thus, we can compute n! by computing (n−1)! and multiplying the result by n. If we add the stipulation that 1! is equal to 1, this observation translates directly into a procedure:

(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))

We can use the substitution model of Section 1.1.5 to watch this procedure in action computing 6!, as shown in Figure 1.3:

(factorial 6)
(* 6 (factorial 5))
(* 6 (* 5 (factorial 4)))
(* 6 (* 5 (* 4 (factorial 3))))
(* 6 (* 5 (* 4 (* 3 (factorial 2)))))
(* 6 (* 5 (* 4 (* 3 (* 2 (factorial 1))))))
(* 6 (* 5 (* 4 (* 3 (* 2 1)))))
(* 6 (* 5 (* 4 (* 3 2))))
(* 6 (* 5 (* 4 6)))
(* 6 (* 5 24))
(* 6 120)
720

Figure 1.3: A linear recursive process for computing 6!.

Now let's take a different perspective on computing factorials. We could describe a rule for computing n! by specifying that we first multiply 1 by 2, then multiply the result by 3, then by 4, and so on until we reach n. More formally, we maintain a running product, together with a counter that counts from 1 up to n. We can describe the computation by saying that the counter and the product simultaneously change from one step to the next according to the rule

product ← counter * product
counter ← counter + 1

and stipulating that n! is the value of the product when the counter exceeds n. Once again, we can recast our description as a procedure for computing factorials:

(define (factorial n)
  (fact-iter 1 1 n))

(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))

As before, we can use the substitution model to visualize the process of computing 6!, as shown in Figure 1.4:

(factorial 6)
(fact-iter 1 1 6)
(fact-iter 1 2 6)
(fact-iter 2 3 6)
(fact-iter 6 4 6)
(fact-iter 24 5 6)
(fact-iter 120 6 6)
(fact-iter 720 7 6)
720

Figure 1.4: A linear iterative process for computing 6!.

(In a real program we would probably use the block structure introduced in the last section to hide the definition of fact-iter:

(define (factorial n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product)
              (+ counter 1))))
  (iter 1 1))

We avoided doing this here so as to minimize the number of things to think about at once.)

Compare the two processes. From one point of view, they seem hardly different at all.
Both compute the same mathematical function on the same domain, and each requires a number of steps proportional to n to compute n!. Indeed, both processes even carry out the same sequence of multiplications, obtaining the same sequence of partial products. On the other hand, when we consider the "shapes" of the two processes, we find that they evolve quite differently.

Consider the first process. The substitution model reveals a shape of expansion followed by contraction, indicated by the arrow in Figure 1.3. The expansion occurs as the process builds up a chain of deferred operations (in this case, a chain of multiplications). The contraction occurs as the operations are actually performed. This type of process, characterized by a chain of deferred operations, is called a recursive process. Carrying out this process requires that the interpreter keep track of the operations to be performed later on. In the computation of n!, the length of the chain of deferred multiplications, and hence the amount of information needed to keep track of it, grows linearly with n (is proportional to n), just like the number of steps. Such a process is called a linear recursive process.

By contrast, the second process does not grow and shrink. At each step, all we need to keep track of, for any n, are the current values of the variables product, counter, and max-count. We call this an iterative process. In general, an iterative process is one whose state can be summarized by a fixed number of state variables, together with a fixed rule that describes how the state variables should be updated as the process moves from state to state and an (optional) end test that specifies conditions under which the process should terminate. In computing n!, the number of steps required grows linearly with n.
Such a process is called a linear iterative process.

The contrast between the two processes can be seen in another way. In the iterative case, the program variables provide a complete description of the state of the process at any point. If we stopped the computation between steps, all we would need to do to resume the computation is to supply the interpreter with the values of the three program variables. Not so with the recursive process. In this case there is some additional "hidden" information, maintained by the interpreter and not contained in the program variables, which indicates "where the process is" in negotiating the chain of deferred operations. The longer the chain, the more information must be maintained. (When we discuss the implementation of procedures on register machines in Chapter 5, we will see that any iterative process can be realized "in hardware" as a machine that has a fixed set of registers and no auxiliary memory. In contrast, realizing a recursive process requires a machine that uses an auxiliary data structure known as a stack.)

In contrasting iteration and recursion, we must be careful not to confuse the notion of a recursive process with the notion of a recursive procedure. When we describe a procedure as recursive, we are referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive, we are speaking about how the process evolves, not about the syntax of how a procedure is written. It may seem disturbing that we refer to a recursive procedure such as fact-iter as generating an iterative process.
However, the process really is iterative: Its state is captured completely by its three state variables, and an interpreter need keep track of only three variables in order to execute the process.

One reason that the distinction between process and procedure may be confusing is that most implementations of common languages (including Ada, Pascal, and C) are designed in such a way that the interpretation of any recursive procedure consumes an amount of memory that grows with the number of procedure calls, even when the process described is, in principle, iterative. As a consequence, these languages can describe iterative processes only by resorting to special-purpose "looping constructs" such as do, repeat, until, for, and while. The implementation of Scheme we shall consider in Chapter 5 does not share this defect. It will execute an iterative process in constant space, even if the iterative process is described by a recursive procedure. An implementation with this property is called tail-recursive. With a tail-recursive implementation, iteration can be expressed using the ordinary procedure call mechanism, so that special iteration constructs are useful only as syntactic sugar. (SICP, Section 1.2)

In short: iteration approaches the answer gradually, overwriting old values with new ones until a condition is met; it keeps no intermediate values, so it uses space efficiently. Recursion decomposes a problem into somewhat smaller problems and retraces its path once it reaches the base case, so the intermediate values must be saved by pushing them onto a stack; when the problem is large this consumes a lot of memory.
I have also never been entirely clear on the relationship between the two. Two good blog posts I recommend: 深究遞歸和迭代的區別、聯繫、優缺點及實例對比 and 漫談遞歸和迭代.
Here is my personal understanding (which may not be right). First, n! in both styles.

Iteration:
int func(int n) {
    int s = 1;
    for (int i = 1; i <= n; i++) {
        s *= i;
    }
    return s;
}

Recursion:
int func(int n) {
    int s = 1;
    if (n > 1) {
        s = n * func(n - 1);
    } // other checks omitted
    return s;
}

Recursion (recursing down, then returning back up) keeps calling itself; it is a special case of iteration, with low time complexity. Iteration is A repeatedly calling B.
Iteration focuses on the microscopic process, while recursion focuses on the macroscopic pattern. The loop variable of an iteration is repeatedly reassigned, while the "loop variable" of a recursion is freshly rebound on each call.
A simple example, computing the Fibonacci sequence:
Recursion:
int fib(int n) {
    if (n == 1 || n == 2)
        return 1;
    else
        return fib(n - 1) + fib(n - 2);
}

Iteration:
int fib2(int n) {
    int f, g;
    g = 0;
    f = 1;
    while (--n) {
        f = f + g;
        g = f - g; // g takes f's old value
    }
    return f;
}

Recursion describes the definition of the problem, while iteration is a solution process.
Recursion is a function calling itself; it repeats computation and is inefficient.
Iteration derives a variable's new value from its old value; it is efficient. Recursion = recursing down (reducing a complex problem to ever simpler ones) + returning back up (after reaching the simplest case, returning step by step to obtain the solutions of the more complex problems). Iteration is A repeatedly calling B. For example:

// this is recursion
int funcA(int n) {
    if (n > 1)
        return n + funcA(n - 1);
    else
        return 1;
}

// this is iteration
int funcB(int n) {
    int i, s = 0;
    for (i = 1; i <= n; i++)
        s += i;
    return s;
}

I see many programmers explain recursion via recursive calls in programming, which I do not think is the essential explanation.
1) The essence of recursion is that, before the definition of a term is complete, the term itself is used inside its own definition, as in "an expression includes the sum of two expressions", where the second occurrence of "expression" is recursive. The essence of recursion is not the unfolding process but the two-level folded form: the two adjacent levels define all the folded levels. For example, def a() { a(); } is a two-level folded form expressing infinitely many levels of recursion.
2) The essence of iteration is that each pass is a complete process that replaces the previous version, like versions in product development: when a new version comes out, it replaces the old one on the user's side.
3) The difference between the two is obvious, and they are not really comparable. If you insist on a formal difference: it is whether the previous pass has finished before the next identical pass (possibly with different parameters) is started.