What are the differences between "recursion" and "iteration"?


Recursion:

】】】
】】】】】】
...
】】】】】】】】】
】】】】】】】】】】】】
】】】】】】】】】
...
】】】】】】
】】】

Iteration:

】】】
】】】
】】】
】】】
...
】】】

(The recursive process first expands and then contracts; the iterative process keeps the same shape from start to finish.)

Just my two cents.


Movie analogies:

Iteration: Edge of Tomorrow

Recursion: Inception


Iteration takes the output and feeds it back in as input for another round of processing. For example: pointing a camera at the monitor it is driving; facing two mirrors toward each other; pointing a karaoke microphone at the speaker; a machine gun whose recoil, after a shot is fired, pulls the trigger again.

Expressed in code: for (int i = 0; i < 100; i++) n = f(n);

For example, the following Shadertoy shader (GLSL) generates a fractal image:

vec2 center = vec2(0.8, 0);
float blue = 0.3;

void mainImage( out vec4 fragColor, in vec2 fragCoord ) {
    vec2 z, c;
    float scale = 20.0 / iGlobalTime; // zoom in over time
    vec2 uv = fragCoord.xy / iResolution.xy;

    // map the pixel to a point c in the complex plane
    c.x = 1.3333 * (uv.x - 0.5) * scale - center.x;
    c.y = (uv.y - 0.5) * scale - center.y;
    z = c;
    vec4 col = vec4(0, 0, 0, 0);

    // Mandelbrot iteration: z = z^2 + c, repeated up to 100 times
    for (int i = 0; i < 100; i++) {
        float x = (z.x * z.x - z.y * z.y) + c.x;
        float y = (z.y * z.x + z.x * z.y) + c.y;
        blue += 0.028;
        if ((x * x + y * y) > 4.0) {   // escaped: colour by how long it took
            col = vec4(0, 0, blue, 0);
            break;
        }
        z.x = x;
        z.y = y;
    }
    fragColor = col;
}

The program iterates 100 times; the more iterations, the finer the detail of the resulting image.

Here is a more down-to-earth example of iteration. Suppose you have a husky and a Chinese pastoral dog (a common mixed breed), and you want to breed puppies that come out fairly close to a purebred husky. First mate the husky with the pastoral dog and get a litter. Then mate the husky with one of those puppies (after it has grown up, of course). Keep mating the husky with each new generation of puppies, iterating generation after generation, and eventually you get dogs that are fairly close to purebred huskies. If the gestation periods bother you (the folk saying goes: cats three, dogs four, pigs five, sheep six, cattle seven, horses eight months), replace the two dogs with a mouse and a pet hamster, which can iterate once a month.

Recursion, simply put, is something calling itself, or containing itself.

Expressed in code: void f(int n){ f(n - 1); } Do not worry that this recurses forever; just notice that inside the function, the function calls itself.
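(A minimal sketch of my own, not part of the original answer: adding a base case is what keeps the self-call from running forever.)

#include <stdio.h>

/* Recursive countdown: the base case (n <= 0) stops the self-calls. */
void f(int n) {
    if (n <= 0)          /* base case: stop recursing */
        return;
    printf("%d\n", n);
    f(n - 1);            /* recursive case: a smaller instance of the same problem */
}

int main(void) {
    f(3);                /* prints 3, 2, 1 */
    return 0;
}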

For example: a monitor inside a monitor, or a mirror inside a mirror. Earlier I said that pointing a camera at a monitor, or facing two mirrors at each other, is iteration; why am I now calling it recursion? There is no contradiction. The act of feeding the output back in as input for another round of processing is iteration, while the resulting effect of something containing itself (a monitor inside a monitor, a mirror inside a mirror) is recursion. The same goes for the image above: the code that generates it is iterative, while the fractal effect it produces is recursive.


Iteration updates a variable's old value with a new one. Recursion is a function calling itself inside its own body.


Recursion is a tree structure: each branch is explored as deep as it will go, and when it can go no further, you backtrack.

Each node is visited only once.

Iteration is a ring structure: each iteration goes once around the loop, without skipping any step, and the loop keeps repeating.

Each node is visited again and again by the loop.
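(A small sketch of my own to make the tree picture concrete, using a hypothetical Node struct: the recursive traversal follows each branch as far as it goes, then backtracks, touching each node exactly once.)

#include <stdio.h>

typedef struct Node {
    int value;
    struct Node *left, *right;
} Node;

/* Depth-first traversal: follow each branch to the end, then back up. */
void visit(const Node *n) {
    if (n == NULL)              /* reached the end of a branch: backtrack */
        return;
    printf("%d\n", n->value);   /* each node is visited exactly once */
    visit(n->left);
    visit(n->right);
}

int main(void) {
    Node leaf1 = {2, NULL, NULL}, leaf2 = {3, NULL, NULL};
    Node root  = {1, &leaf1, &leaf2};
    visit(&root);               /* prints 1, 2, 3 */
    return 0;
}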


I once read an English explanation of the difference between the two, though I can no longer find the original. Roughly: in recursion, the size of the problem shrinks, which is how you eventually reach a solution; in iteration, you approach the answer step by step, and the problem does not necessarily get smaller, you just keep adjusting and getting closer. Computing n! recursively fits this description very well, and so do many methods from numerical analysis, such as Newton's method.
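(A hedged sketch of my own, not from the original answer: Newton's method for the square root keeps refining a single estimate; the problem never gets smaller, the estimate just creeps toward the answer.)

#include <stdio.h>
#include <math.h>

/* Newton's method for sqrt(a), a >= 0: refine x until x*x is close to a. */
double newton_sqrt(double a) {
    double x = a > 1.0 ? a : 1.0;      /* any positive starting guess works */
    while (fabs(x * x - a) > 1e-12) {
        x = 0.5 * (x + a / x);         /* next estimate from the current one */
    }
    return x;
}

int main(void) {
    printf("%.10f\n", newton_sqrt(2.0));   /* ~1.4142135624 */
    return 0;
}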


How the two relate to each other

1. From the computer's point of view, recursion is a special case of iteration.

This example implements factorial both ways in JavaScript; you can press F12 in a browser to open the console and experiment there.

// Iteration: repeat a fixed procedure until you reach the goal.
// Bisection and Newton's method are good mathematical examples of iteration.
function iteration(x){
    var sum = 1;
    for(x; x >= 1; x--){
        sum = sum * x;
    }
    return sum;
}

// Recursion: iteration in which the function calls itself.
// (The formal definition may not put it this way; this is just my own understanding.)
function recursion(x){
    if(x > 1){
        return x * recursion(x - 1);
    }else{
        return 1;
    }
}

2. Every iterative example has a recursive formulation, and vice versa.

For example, try rewriting the following bubble sort (descending order) from iterative form into recursive form:

function swap(array, i, j){
    const temp = array[i]
    array[i] = array[j]
    array[j] = temp
}
function bubble(array){
    let length = array.length
    for(let i = length - 1; i > 0; i--){
        for(let j = 0; j < i; j++){
            if(array[j] < array[j + 1]){
                swap(array, j, j + 1)
            }
        }
    }
    return array
}

answer:

function swap(array, i, j){
const temp = array[i]
array[i] = array[j]
array[j] = temp
}
function bubble(array, i = array.length - 1, j = 0){
    if(i <= 0){
        return array
    }
    if(j > i){
        j = 0
        i--
    }
    if(array[j] < array[j + 1]){
        swap(array, j, j + 1)
    }
    return bubble(array, i, j + 1)
}

From the standpoint of code elegance, the recursive form has less indentation and looks flatter.

Drawbacks of recursion

1. Recursion easily causes stack-overflow errors

Because hundreds or thousands of call records have to be kept alive at the same time, recursion is very memory-hungry; that is why "tail-call optimization" exists. Iteration, at worst, makes the browser appear to hang because the thread is tied up by a heavy computation; it will not throw a stack-overflow error at runtime.

[Note: I later found that Chrome v61.0.3163.100 does not actually perform tail-call optimization.]
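(An illustrative sketch of my own, in C rather than JavaScript: a tail-recursive factorial keeps the running result in an accumulator, so the recursive call is the very last thing the function does. Whether the stack frame is actually reused depends on the compiler and its flags; the C standard does not guarantee tail-call optimization.)

#include <stdio.h>

/* Tail-recursive factorial: nothing is left to do after the recursive call,
   so a compiler that performs tail-call optimization can reuse the same
   stack frame instead of growing the stack. */
unsigned long long fact_acc(unsigned int n, unsigned long long acc) {
    if (n <= 1)
        return acc;
    return fact_acc(n - 1, acc * n);   /* the call is in tail position */
}

int main(void) {
    printf("%llu\n", fact_acc(10, 1)); /* 3628800 */
    return 0;
}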

2. In terms of efficiency, recursion may do redundant computation

A recursive solution can repeat work. The classic example is the Fibonacci sequence: computing the 6th number requires the 4th and the 5th, and computing the 5th requires the 4th again, so the same values get recomputed. Iteration has a clear advantage here.

But that does not mean recursion can always be replaced, because recursion is often more readable.
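(A small sketch of my own showing how the redundancy can be removed while keeping the recursive shape, by caching results; the cache size is an arbitrary choice for this example.)

#include <stdio.h>

/* Naive recursion recomputes fib(4) for both fib(5) and fib(6).
   Caching each result the first time it is computed removes the
   redundancy while keeping the recursive definition readable. */
#define MAXN 64                       /* arbitrary cap; assumes n < MAXN */
static long long memo[MAXN];          /* 0 means "not computed yet" */

long long fib(int n) {
    if (n <= 2)
        return 1;
    if (memo[n] == 0)
        memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void) {
    printf("%lld\n", fib(50));        /* 12586269025, computed in ~50 steps */
    return 0;
}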


Iteration: a loop construct, e.g. a for or while loop.

Recursion: a selection construct, e.g. an if/else that calls the function itself and exits at the right moment.


In terms of time complexity, iteration is far better than naive recursion, isn't it... recursion ends up doing a lot of redundant computation~


  • 深究遞歸和迭代的區別、聯繫、優缺點及實例對比

(this is the best article I have seen explaining the difference between recursion and iteration)

The article summarizes the relationship between the two:

1) Recursion always contains iteration, but iteration does not necessarily contain recursion; most of the time the two can be converted into each other.

2) Use iteration where you can instead of recursion: recursive calls waste space, and recursing too deep easily overflows the stack. /* relatively speaking */

  • In mathematics, recursion emphasizes that a new value depends on several previously computed values; for example the Fibonacci sequence F_n = F_{n-1} + F_{n-2} (n ≥ 2),

whereas iteration generally involves only x_{n+1} and x_n, i.e. x_{n+1} = f(x_n);

  • In algorithm analysis on a computer (I am not very familiar with the complexity analysis of recursion; see 《演算法導論》, Introduction to Algorithms), a recursive method is usually analyzed by turning the recurrence into a tree and expanding it downward;

Common iterative methods include Newton's method and gradient descent; as Tianyuan explained, they approach the answer in a loop, so that an initial value converges to a limit after a series of iterations.
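(A hedged sketch of my own of that kind of convergent iteration: gradient descent on f(x) = (x − 3)², where each step overwrites the estimate with one slightly closer to the minimum at x = 3.)

#include <stdio.h>
#include <math.h>

/* Gradient descent on f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
   Each iteration overwrites x with a value slightly closer to the minimum. */
int main(void) {
    double x = 0.0;                    /* arbitrary starting point */
    const double rate = 0.1;           /* learning rate */
    while (fabs(2.0 * (x - 3.0)) > 1e-9) {
        x = x - rate * 2.0 * (x - 3.0);
    }
    printf("%.6f\n", x);               /* ~3.000000 */
    return 0;
}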

(also worth looking at the Wikipedia entries)

I think the main thing is to actually use these concrete methods yourself; then you will understand the differences better.


Here is an example: you need to buy a toy for a child.

Recursion: you do not really understand what children want, so to narrow things down you ask your son to pick a toy for the grandson. Your son is only slightly better than you and still does not really understand children, so to narrow it down further the job is handed down to the grandson. And so on, until a suitable toy is found.

Iteration: you pick one toy, decide it will not do, pick another, still no good, and so on, until you find a suitable one.

So, in one sentence: recursion is calling yourself, each call aiming to shrink the problem; iteration is running yourself many times, each run aiming to get closer to the goal.


We begin by considering the factorial function, defined by

n! = n · (n − 1) · (n − 2) ⋯ 3 · 2 · 1.

There are many ways to compute factorials. One way is to make use

of the observation that n! is equal to n times (n − 1)! for any positive

integer n:

n! = n · [(n − 1) · (n − 2) ⋯ 3 · 2 · 1] = n · (n − 1)!.

Thus, we can compute n! by computing (n − 1)! and multiplying the

result by n. If we add the stipulation that 1! is equal to 1, this observation

translates directly into a procedure:

(define (factorial n)

(if (= n 1)

1

(* n (factorial (- n 1)))))


(factorial 6)

(* 6 (factorial 5))

(* 6 (* 5 (factorial 4)))

(* 6 (* 5 (* 4 (factorial 3))))

(* 6 (* 5 (* 4 (* 3 (factorial 2)))))

(* 6 (* 5 (* 4 (* 3 (* 2 (factorial 1))))))

(* 6 (* 5 (* 4 (* 3 (* 2 1)))))

(* 6 (* 5 (* 4 (* 3 2))))

(* 6 (* 5 (* 4 6)))

(* 6 (* 5 24))

(* 6 120)

720

Figure 1.3: A linear recursive process for computing 6!.

We can use the substitution model of Section 1.1.5 to watch this procedure

in action computing 6!, as shown in Figure 1.3.

Now let』s take a different perspective on computing factorials. We

could describe a rule for computing n! by specifying that we first multiply

1 by 2, then multiply the result by 3, then by 4, and so on until we

reach n. More formally, we maintain a running product, together with

a counter that counts from 1 up to n. We can describe the computation

by saying that the counter and the product simultaneously change from

one step to the next according to the rule

product ← counter * product

counter ← counter + 1

and stipulating that n! is the value of the product when the counter

exceeds n.

Once again, we can recast our description as a procedure for computing

factorials:29

29In a real program we would probably use the block structure introduced in the last

section to hide the definition of fact-iter:


(factorial 6)

(fact-iter 1 1 6)

(fact-iter 1 2 6)

(fact-iter 2 3 6)

(fact-iter 6 4 6)

(fact-iter 24 5 6)

(fact-iter 120 6 6)

(fact-iter 720 7 6)

720

Figure 1.4: A linear iterative process for computing 6!.

(define (factorial n)

(fact-iter 1 1 n))

(define (fact-iter product counter max-count)

(if (> counter max-count)

product

(fact-iter (* counter product)

(+ counter 1)

max-count)))

As before, we can use the substitution model to visualize the process of

computing 6!, as shown in Figure 1.4.

(define (factorial n)

(define (iter product counter)

(if (> counter n)

product

(iter (* counter product)

(+ counter 1))))

(iter 1 1))

We avoided doing this here so as to minimize the number of things to think about at

once.


Compare the two processes. From one point of view, they seem

hardly different at all. Both compute the same mathematical function on

the same domain, and each requires a number of steps proportional to n

to compute n!. Indeed, both processes even carry out the same sequence

of multiplications, obtaining the same sequence of partial products. On

the other hand, when we consider the 「shapes」 of the two processes, we

find that they evolve quite differently.

Consider the first process. The substitution model reveals a shape of

expansion followed by contraction, indicated by the arrow in Figure 1.3.

The expansion occurs as the process builds up a chain of deferred operations

(in this case, a chain of multiplications). The contraction occurs

as the operations are actually performed. This type of process, characterized

by a chain of deferred operations, is called a recursive process.

Carrying out this process requires that the interpreter keep track of the

operations to be performed later on. In the computation of n!, the length

of the chain of deferred multiplications, and hence the amount of information

needed to keep track of it, grows linearly with n (is proportional

to n), just like the number of steps. Such a process is called a linear recursive

process.

By contrast, the second process does not grow and shrink. At each

step, all we need to keep track of, for any n, are the current values of

the variables product, counter, and max-count. We call this an iterative

process. In general, an iterative process is one whose state can be summarized

by a fixed number of state variables, together with a fixed rule

that describes how the state variables should be updated as the process

moves from state to state and an (optional) end test that specifies conditions

under which the process should terminate. In computing n!, the

number of steps required grows linearly with n. Such a process is called

a linear iterative process.


The contrast between the two processes can be seen in another way.

In the iterative case, the program variables provide a complete description

of the state of the process at any point. If we stopped the computation

between steps, all we would need to do to resume the computation

is to supply the interpreter with the values of the three program

variables. Not so with the recursive process. In this case there is some

additional 「hidden」 information, maintained by the interpreter and not

contained in the program variables, which indicates 「where the process

is」 in negotiating the chain of deferred operations. The longer the chain,

the more information must be maintained.30

In contrasting iteration and recursion, we must be careful not to

confuse the notion of a recursive process with the notion of a recursive

procedure. When we describe a procedure as recursive, we are referring

to the syntactic fact that the procedure definition refers (either directly

or indirectly) to the procedure itself. But when we describe a process

as following a paern that is, say, linearly recursive, we are speaking

about how the process evolves, not about the syntax of how a procedure

is wrien. It may seem disturbing that we refer to a recursive procedure

such as fact-iter as generating an iterative process. However, the process

really is iterative: Its state is captured completely by its three state

variables, and an interpreter need keep track of only three variables in

order to execute the process.

One reason that the distinction between process and procedure may

be confusing is that most implementations of common languages (including

Ada, Pascal, and C) are designed in such a way that the interpretation

of any recursive procedure consumes an amount of memory that

30When we discuss the implementation of procedures on register machines in Chapter

5, we will see that any iterative process can be realized 「in hardware」 as a machine

that has a fixed set of registers and no auxiliary memory. In contrast, realizing a recursive

process requires a machine that uses an auxiliary data structure known as a

stack.


grows with the number of procedure calls, even when the process described

is, in principle, iterative. As a consequence, these languages can

describe iterative processes only by resorting to special-purpose 「looping

constructs」 such as do, repeat, until, for, and while. The implementation

of Scheme we shall consider in Chapter 5 does not share this

defect. It will execute an iterative process in constant space, even if the

iterative process is described by a recursive procedure. An implementation

with this property is called tail-recursive. With a tail-recursive

implementation, iteration can be expressed using the ordinary procedure

call mechanism, so that special iteration constructs are useful only

as syntactic sugar.31

(SICP, Section 1.2)


Iteration approaches the answer gradually, overwriting the old value with the new one until some condition is met; it keeps no intermediate values, so it uses space efficiently.

Recursion breaks a problem down into somewhat smaller problems and returns back along the same path once it hits the base case, so it must keep the relevant intermediate values. These are pushed onto the stack, and for large problems that consumes a lot of memory.
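(A small illustration of my own of what "pushing intermediate values onto a stack" amounts to: recursive factorial rewritten as a loop that manages an explicit stack of pending multipliers.)

#include <stdio.h>

/* Factorial with an explicit stack: what the call stack does implicitly
   during recursion (remembering the pending "* n" operations) is done
   here by hand with an array. Assumes n < 64 for this sketch. */
long long factorial(int n) {
    int stack[64];              /* pending multipliers (intermediate values) */
    int top = 0;
    while (n > 1) {             /* "recurse": push the deferred work */
        stack[top++] = n;
        n--;
    }
    long long result = 1;       /* base case */
    while (top > 0) {           /* "return": pop and perform the deferred work */
        result *= stack[--top];
    }
    return result;
}

int main(void) {
    printf("%lld\n", factorial(10));   /* 3628800 */
    return 0;
}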


I have never been entirely clear about the relationship between the two myself.

I have read two fairly good blog posts on it, recommended:

深究遞歸和迭代的區別、聯繫、優缺點及實例對比

漫談遞歸和迭代


This is just my personal understanding; I am not sure whether it is right.

First, let's use computing n! to explain both.

Iteration:

int func(int n){
    int s = 1;
    for(int i = 1; i <= n; i++){
        s *= i;
    }
    return s;
}

Recursion:

int func(int n){
    int s = 1;
    if(n > 1){
        s = n * func(n - 1);
    }
    // other checks omitted
    return s;
}

Recursion (recurse down, then return back up): the function keeps calling itself; it is a special case of iteration and has low time complexity.

Iteration: A keeps calling B.


Iteration focuses on the micro-level process, while recursion focuses on the macro-level pattern. The loop variable in iteration is repeatedly reassigned, whereas in recursion it is freshly bound on each call.


A simple example implementing the Fibonacci sequence:

Recursion:

int fib(int n)
{
    if (n == 1 || n == 2) return 1;
    else return fib(n - 1) + fib(n - 2);
}

Iteration:

int fib2(int n)
{
    int f, g;
    g = 0;
    f = 1;
    while (--n)
    {
        f = f + g;   /* f becomes the next Fibonacci number */
        g = f - g;   /* g becomes the old value of f */
    }
    return f;
}


Recursion describes the definition of a problem, while iteration is a process for solving one.


Recursion is a function calling itself. It involves repeated computation, so it is less efficient.

Iteration derives a variable's new value from its old value, so it is more efficient.

Recursion = recursing down (reducing a complex problem to simpler and simpler ones) + returning back up (once the simplest case is reached, return step by step, building up the solution to the original problem).

Iteration is just A calling B over and over.

For example:

// Recursion
int funcA(int n)
{
    if(n > 1)
        return n + funcA(n - 1);
    else
        return 1;
}

// Iteration
int funcB(int n)
{
    int i, s = 0;
    for(i = 1; i <= n; i++)
        s += i;
    return s;
}


Many programmers explain recursion in terms of recursive function calls in programming; I do not think that is an explanation of its essence.

1) The essence of recursion is that the term being defined is used inside its own definition before the definition is complete, as in "an expression includes the form of two expressions added together", where the second occurrence of "expression" is recursive. The essence of recursion is not the unfolding process but a two-level folded form: the two levels together define all of the folded levels below them. For example, def a() { a(); } is a two-level folded form expressing infinitely many levels of recursion.

2) The essence of iteration is that each pass is a complete process that replaces the previous version, like successive versions in product development: once a new version is released, it replaces the old one on the user's side.

3) The difference between the two is obvious; they are not really comparable. If you insist on a formal distinction, it is whether the previous pass has finished before the next identical pass (possibly with different parameters) is started.

