
Why do math majors have to study computer science?

At the moment, math programs at universities in China all require computer courses. Why?


It's not just the asker; I used to have the same doubt, until I moved somewhere else and reached the second year of my PhD.

First of all, computing does not lower your IQ. Knowing how to write programs correctly, developing good programming habits, and choosing a language that suits you all require real intelligence, and they make your mathematical research far more efficient.

Besides, for the problems you consider simple — mathematical problems whose definitions and statements pose no difficulty — you may not have the patience to compute, and you may not actually be able to. By analogy: knowing how to solve a system of two linear equations in two unknowns is not the same as being able to solve systems of equations; knowing how to write the multiplicative group of the finite field $\mathbb{F}_5$ as a cyclic group generated by one element is not the same as knowing how to compute discrete logarithms; knowing how to compute the homology groups of a torus is not the same as being able to compute homology groups.
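(To make the gap between "knowing the statement" and "being able to compute" concrete, here is a minimal brute-force discrete-logarithm sketch in Python; the prime and generator below are arbitrary choices for illustration, not anything from the original answer.)

```python
# Brute-force discrete logarithm: find x with g^x = h (mod p).
# Fine for a toy field like F_5; hopeless at cryptographic sizes.
def discrete_log(g, h, p):
    value = 1
    for x in range(p - 1):
        if value == h % p:
            return x
        value = (value * g) % p
    return None  # h is not in the subgroup generated by g

print(discrete_log(2, 3, 5))          # 3, since 2^3 = 8 = 3 (mod 5)
print(discrete_log(3, 12345, 65537))  # same code, but the loop is exponential in the bit length
```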

Concrete computational problems that are genuinely hard are everywhere in mathematics, especially in commutative algebra, homological algebra, representation theory, and number theory. That is why, even in the 21st century, many mathematicians still study problems about hyperbolic plane geometry or even 2×2 matrices. In research you cannot spend all your time counting, factoring, and multiplying matrices by hand; you must set aside time to think abstractly, or the work will drag on forever. And if you do not know an algorithm that terminates, working purely by hand and by "divine intuition" often produces wrong conclusions. What is meant as a display of superior intelligence turns out to be astonishing stupidity.

That is why, in the eyes of the older professors, strong programming ability in a young student is a very important advantage. Over time you will find that the computing you learn and the mathematics you learn are, 80% of the time, the same thing.

Usually these hard problems have very nice theoretical formulations; but if the computational problem is not solved, applying the theory to a concrete case often amounts to no more than restating what you already know in different words.

For this reason many computer algebra systems — Mathematica, Magma, Sage, and so on — have been developed for mathematicians and students who know how to use computers, and mathematicians are happy to implement the computable parts of their theories on a machine: first to suggest more mathematical questions and let mathematicians and students spend more of their energy on thinking, and also to lay the groundwork for present and future applications.


Seeing this question, I recommend a not-too-long article:

http://archive.numdam.org/ARCHIVE/RSMUP/RSMUP_1995__93_/RSMUP_1995__93__187_0/RSMUP_1995__93__187_0.pdf

The author, J. W. S. Cassels, is one of the giants of number theory. He states his thesis right at the start:

Number theory is an experimental science.

In the article he gives three examples:

1. The discovery of the original as well as the refined form of the BSD conjecture;

2. The existence of special cubic surfaces $ax^3+by^3+cz^3+dw^3=0$, $a,b,c,d\in\mathbb{Z}$, which have solutions over every finite field but no nontrivial solution over $\mathbb{Q}$ (i.e., no solution other than $(0,0,0,0)$);

3. (A particularly striking example) Let $f,g\in\mathbb{C}[x]$ be polynomials, neither of which can be written as $p(q(x))$ with $p,q\in\mathbb{C}[x]$, $\deg q\ge 2$, and such that $f$ cannot be written in the form $g(ax+b)$ with $a,b\in\mathbb{C}$.

Under these constraints, if $f(x)-g(y)$ is reducible in $\mathbb{C}[x,y]$, then necessarily

$\deg f=\deg g\in\{7,11,13,15,21,31\}$.

The proof of this result uses the classification of finite simple groups.

All three of the problems above, without exception, drew on what was (for the time) a large amount of computing power.

Cassels concludes:

I hope I have now said enough to convince you of my thesis that the theory of numbers is an experimental subject and that nowadays the sensible way to experiment is usually on a computer.


How was the four-color theorem proved?

In 1976, with 1200 hours of computer time and ten billion case checks.

How was the weak Goldbach conjecture proved?

In 2013: a mathematical proof established it for odd numbers above $10^{30}$, and computers verified it for odd numbers up to $10^{30}$.

Why do people generally believe the Riemann hypothesis is true?

Computers have verified its first 1,500,000,000 nontrivial zeros.

Why can some people, when proving inequalities, casually produce Schur-type decompositions with coefficients in the hundreds of millions?

Why can π be computed to hundreds of millions of decimal places?

……

Before asking mathematics "why", first let the computer check "whether".

(Following a suggestion in the comments, I'm putting the point up front.)

Here is an example about primes.

Back in the day our friend Fermat was very good at mathematics; many of the conjectures he proposed aren't even called conjectures any more but theorems. One day he guessed:

$F(n)=2^{2^{n}}+1$ is always prime.

As everyone later learned:

$F(5)=2^{2^5}+1=4294967297=641\times 6700417$

This is not a hard conclusion to reach. In fact, with optimized trial division, about two hundred division operations — no more than two hours of work — are enough to complete the factorization.
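(A hedged aside, not part of the original answer: the hand computation can be mimicked in a few lines of Python. Restricting candidate divisors to numbers of the form $64k+1$ is Euler's observation about the possible prime factors of $F(5)$, which shortens the search even further than the couple of hundred divisions mentioned above.)

```python
# Trial division for F(5) = 2^32 + 1, checking only candidates of the
# form 64k + 1 (Euler: any prime factor of F(5) must have this form).
F5 = 2**32 + 1
divisions = 0
for candidate in range(65, F5, 64):
    divisions += 1
    if F5 % candidate == 0:
        print(candidate, F5 // candidate, divisions)  # 641 6700417 10
        break
```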

However...

$F(6)=2^{2^6}+1=18446744073709551617=274177\times 67280421310721$

And also...

$F(7)=2^{2^7}+1=340282366920938463463374607431768211457=59649589127497217\times 5704689200685129054721$

Generally speaking, deciding whether a large number is prime is easy: with the Miller–Rabin algorithm and its relatives, testing whether a number of the same size as $F(7)$ is prime takes about a ten-thousandth of a second.
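(A small sketch of such a check in Python, using SymPy's isprime, which combines strong probable-prime tests; exact timings will of course vary by machine.)

```python
import time
from sympy import isprime

F7 = 2**(2**7) + 1                     # the 7th Fermat number, 39 digits

start = time.perf_counter()
print(isprime(F7))                     # False
print(time.perf_counter() - start)     # a tiny fraction of a second

# Compare against the known factorization quoted above.
p, q = 59649589127497217, 5704689200685129054721
print(p * q == F7)                     # True
```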

Factoring a large number, and counting the primes in a given range, are comparatively hard; they point, respectively, to the RSA public-key cryptosystem and to the prime number theorem.

Take the prime number theorem: its proof is hard (for me, anyway), but for any math-and-computer hobbyist, verifying by computation that $\mathrm{Li}(x)\approx \frac{x}{\ln x}$ is quite easy.
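(For instance, a few lines of Python with SymPy's primepi already show the prime number theorem taking shape; a sketch only.)

```python
# Compare the prime-counting function pi(x) with the rough
# approximation x / ln(x) promised by the prime number theorem.
from math import log
from sympy import primepi

for x in [10**3, 10**4, 10**5, 10**6]:
    ratio = primepi(x) / (x / log(x))
    print(x, primepi(x), round(ratio, 3))
# The ratio drifts slowly toward 1 as x grows.
```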

A person cannot spend their entire life on calculation.

Here are some rather sad examples:

In 1610, the German-born Ludolph van Ceulen computed π to the 35th decimal place. The work consumed nearly his entire life. After his death, in his memory, the value was engraved on his tombstone, and it became known as the Ludolphine number.

In 1874, William Shanks of England, after fifteen years of labor, computed π to 707 decimal places and had them engraved on his tombstone as the honor of a lifetime. Sadly, later generations found that he had gone wrong from the 528th place onward.

In 1690, James (Jakob) Bernoulli of Switzerland had already spent a full year trying to prove that the catenary is the same curve as a parabola. If he had owned a computer and learned even a little basic PHP, he could have seen that the conjecture is simply false. His brother John (Johann) Bernoulli settled the problem in one evening.
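(In that spirit, a minimal numerical check in Python rather than PHP: fit the best parabola to a catenary and look at the residual. The interval and sample count are arbitrary choices for the example.)

```python
# A catenary y = cosh(x) versus its best-fitting parabola on [-1, 1].
# If the catenary really were a parabola, the residual would vanish.
import numpy as np

x = np.linspace(-1.0, 1.0, 201)
catenary = np.cosh(x)
coeffs = np.polyfit(x, catenary, 2)          # least-squares quadratic fit
residual = np.max(np.abs(catenary - np.polyval(coeffs, x)))
print(residual)                              # ~1e-2: small, but clearly nonzero
```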

Learning computing is not strictly necessary, but it is genuinely useful, and it absolutely does not lower your IQ.

To keep James Bernoulli's tragedy from repeating itself: before asking mathematics "why", first let the computer check "whether".

======================= (an unglamorous divider) =======================

All these upvotes scared me. (The PHP joke really does pack a punch.)

Let me say a couple more words about the relationship between mathematics and informatics...

The science-fiction writer He Xi wrote a story, 《傷心者》. I don't much like it, but he is right about one thing: the glorious theorems of mathematics are not always "useful"; most of the time they are shelved and ignored. The papers hide in run-down libraries or in the dim nodes of the network, their citation counts forever stuck in the single digits...

Informatics — at least the part of it that counts as a branch of mathematics — is in much better shape. It is a young science and supply falls short of demand: problems to solve and algorithms to improve pile up in front of you, and even one grabbed with your eyes closed is useful if you succeed.

Plenty of people have probably fantasized about travelling back to the age of Newton or Lagrange, putting their own name first on the list of pioneers of ordinary differential equations, analytic number theory, or transcendental equations, and eclipsing their peers. Today people still use that dream to lure young students, which is how the world ended up with so many code monkeys.

Not only code monkeys, though: if code monkeys correspond to applied mathematicians, might algorithmists be put on a par with pure mathematicians?

Algorithms, one of the youngest and most brilliant branches of mathematics, will in time receive the glory due to it. It will have its own Solvay Conference, its own crises and revolutions. The nearest of these, the technological singularity — the moment artificial intelligence surpasses human intelligence — some guess is only ten years away; my own guess is twenty-odd years.

Merely using computers to speed up calculation only scratches the surface. What will mathematical research look like by then? Nobody knows; nobody can even say for sure whether the master-and-servant relationship between humans and computers (in scientific research) will be turned upside down.

For informatics; for mathematics; for the glorious revolution; so that you won't starve in any possible future — you might as well learn some computer science.

The above was written on 2015-08-02.


The following was added on 2017-12-12.

I wrote this in high school, and some of the views now strike me as quite off; but the outlook on the future remains optimistic, so the rest is just detail and I will leave it unedited.

Recently quite a few people I know have upvoted this answer, so here is a small note of explanation.


Not everyone who studies mathematics is smart enough to make a living from mathematics; teaching you some computing is handing you a bowl of rice.


I just don't get it... how would learning computer science lower your IQ?

If you mean programming, then it's just a tool... As the saying goes, extra skills are never a burden. There's nothing bad about learning it; treat it as a fancy calculator. I doubt many people would vote to make the whole math department switch back to the abacus.

If you mean computer engineering, that really is a different thing from mathematics. Learning it isn't much use for math, and the systems, hardware, and low-level courses are brutally hard. I assume that's not what you mean.

If you mean computer science, its theoretical part can basically be counted as a branch of mathematics... Turing was a mathematician; P vs NP is a mathematics problem; complexity, probability, cryptography and all the rest are genuine mathematics problems, and hard ones at that. How does that lower anyone's IQ? Learning it is no different from learning calculus or set theory — what's the problem?

If the asker really thinks computer science lowers your IQ and math people are superior, then surely P vs NP has remained open this long only because we CS people are idiots. Send in the math department: they can grab the million-dollar prize in minutes, pick up a PhD, and win at life. What are you waiting for?


Why do computer science majors have to study Marxism?!!!


"Learning computer science lowers your IQ....."

"Treating computer science as a fallback...."

Are math students really this full of themselves?

Have a look at what Vladimir Voevodsky, the 2002 Fields Medalist, had to say:

Voevodsky's Mathematical Revolution

On last Thursday at the Heidelberg Laureate Forum, Vladimir Voevodsky gave perhaps the most revolutionary scientific talk I』ve ever heard. I doubt if it generated much buzz among the young scientists in advance, though, because it had the inscrutable title 「Univalent Foundations of Mathematics,」 and the abstract contained sentences like this one: 「Set-theoretic approach to foundations of mathematics work well until one starts to think about categories since categories cannot be properly considered as sets with structures due to the required invariance of categorical constructions with respect to equivalences rather than isomorphisms of categories.」

Eyes glazed over yet?

But what actually happened was this: Voevodsky told mathematicians that their lives are about to change. Soon enough, they』re going to find themselves doing mathematics at the computer, with the aid of computer proof assistants. Soon, they won』t consider a theorem proven until a computer has verified it. Soon, they』ll be able to collaborate freely, even with mathematicians whose skills they don』t have confidence in. And soon, they』ll understand the foundations of mathematics very differently.

Oh, and by the way — just in case the computer scientists in the crowd think that this has nothing to do with them — he also showed that the theory of programming languages is in fact the same thing as homotopy theory, one of the most abstruse areas of mathematics. And both of them are the same thing as mathematical logic. The three fields express the same ideas in very different language. (He only touched on this connection in his talk, though.)

Here』s the story Voevodsky told to support these claims. A bit over a decade ago, he was wrapping up the work for which he』d won his Fields medal, he was out of big ideas in that area of mathematics, and he was looking for a new area to work in. He asked himself, 「What would be the most important thing I could do for math at this period of development and such that I could use my skills and resources to be helpful?」 His first idea was to develop more connections between pure and applied mathematics. 「But I wasted two years,」 he said, 「because I totally failed.」 So he turned to idea number two: To develop software that mathematicians could use in their everyday work, to do proofs.

He started by learning about existing computer proof systems. Rapidly, he figured out that every proof system but one was inadequate for what he had in mind. And that last one — Coq — he couldn't understand. It was based on something called Martin-Löf type theory, and he just couldn't get his head around it.

So he did what any good but confused student would do: He took a class in it. And in the middle of the midterm, it finally clicked. He realized that all this type theory stuff could be translated to be equivalent to homotopy theory, his field of mathematics. Not only that, but it could provide a new, self-contained foundation for all of mathematics.

The thing that』s so remarkable about this new foundation is that the fundamental concepts are much closer to where ordinary mathematicians do their work. In the usual foundation, Zermelo-Frankel set theory, it takes an enormous amount of work just to build up the basic concepts and theorems that mathematicians rely on every day. The result is that if you want a computer to check your proofs, you have to teach it all that theory first. Essentially, you have to give it the same education you got — except that you have to do it in a far more exacting way. As a result, the only people so far who have used computer proof systems are computer scientists who specialize in it, and it takes them many years of effort to check a single new theorem. Georges Gonthier, for example, spent a decade checking the four-color theorem.

But this approach circumvents all that labor. Not only that, but the language the computer understands is far closer to natural mathematical language. Yes, mathematicians who want to use a proof assistant will have to learn some things – essentially, it』s learning a programming language – but once they』ve made that investment, the process of using the proof assistant becomes pretty natural. In fact, Voevodsky says, it』s a bit like playing a video game. You interact with the computer. 「You tell the computer, try this, and it tries it, and it gives you back the result of its actions,」 Voevodsky says. 「Sometimes it』s unexpected what comes out of it. It』s fun.」

I asked him if he really thought all mathematicians were going to end up using computers to create their proofs. 「I can』t see how else it will go,」 he said. 「I think the process will be first accepted by some small subset, then it will grow, and eventually it will become a really standard thing. The next step is when it will start to be taught at math grad schools, and then the next step is when it will be taught at the undergraduate level. That may take tens of years, I don』t know, but I don』t see what else could happen.」

He also predicts that this will lead to a blossoming of collaboration, pointing out that right now, collaboration requires an enormous trust, because it』s too much work to carefully check your collaborator』s work. With computer verification, the computer does all that for you, so you can collaborate with anyone and know that what they produce is solid. That creates the possibility of mathematicians doing large-scale collaborative projects that have been impractical until now.

The same week as the Heidelberg Laureate Forum was a conference in Barcelona on univalent foundations, which Voevodsky skipped in order to be with us. A special issue of a journal (perhaps Automated Reasoning — Voevodsky couldn』t quite remember) will come out of the conference, with papers written by the participants. Almost all of them will be submitted together with the formalized proof in Coq. That』s most likely the first time such a thing has ever happened.

Some of those computer verifications rely on a library of verified proofs that Voevodsky himself has created, so Voevodsky decided to submit his library to ArXiv. He imagined a one-page description of what the library is, along with all of his Coq files. It turns out, however, that ArXiv isn』t yet up to the task — while it can accept attached files, they can』t have any directory structure. Voevodsky plans on pestering the folks who run ArXiv until they make it possible.

One more way that he』s leading the revolution.

Whether you lean toward theory or toward applications, if you study mathematics you should learn computing (programming).

(Not every computing subject, of course.)

I really don't know where some people's sense of superiority comes from....

Link: Voevodsky's Mathematical Revolution

----------------------------------------------------------------------------------------------------------------------------------------

Addendum:

If a Fields Medalist feels too remote, fine.

I can give another example: research has already shown that learning to program helps improve mathematical ability.

Transferring Skills at Solving Word Problems from Computing to Algebra Through Bootstrap

With an explicit thesis and data, this paper should be more convincing.


Saving a spot; will come back to answer when I have time.


In some especially pure areas of mathematics it may not be needed, but many other areas need it to a greater or lesser degree.

I think I mentioned in a previous answer that a classic joke in the field of nonlinear dynamical systems goes like this.

Professor A brings a system of nonlinear equations to Professor B:

A:"Can you solve these equations?"

B:"No, I can"t, they"re nonlinear."

A:"Then what can I do?"

B:"You can get out of my office."

Before computers became widespread, this was a genuinely hopeless problem, especially for chaotic systems. For other nonlinear systems one can still analyze and prove the dynamical behavior qualitatively — how many fixed points there are, whether there are closed orbits, how stable a given attractor is, whether a circle map has periodic points, and so on — but the study of chaotic systems, transient chaos, and the like depends to a considerable extent on simulation. There are also problems like Hilbert's sixteenth problem for two-dimensional polynomial systems (still unsolved, as far as I know); for some of its variants, the partial proofs obtained so far have likewise made a certain amount of use of simulation.
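(To give a flavor of the kind of simulation meant here, a tiny logistic-map sketch in Python; the parameter 3.9 is just a standard value in the chaotic regime, not tied to anything in the original answer.)

```python
# Iterate x_{n+1} = r x_n (1 - x_n) from two nearby initial conditions;
# in the chaotic regime the orbits separate until they are uncorrelated.
r = 3.9
x, y = 0.2, 0.2 + 1e-9
for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
print(abs(x - y))   # the initial 1e-9 gap has grown to order one
```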

Sometimes simulation can also suggest ideas for a proof, help you rule out wrong possibilities, and keep you pointed in the right direction. When the object under study is extremely complicated and current analytic tools are not powerful enough, this matters a great deal.

Thinking it is useless may just reflect the area you work in; but anyone who thinks learning computer science lowers your IQ probably isn't thinking very clearly to begin with.


Actually, learning computing is so that students who later decide they no longer want to do mathematics still have a skill to make a living with.


In short, a computer is like installing a speed-hack cheat for the calculating part of your brain...


Damn, is your IQ really that fragile?


The main purpose is to filter out the people whose IQ isn't good enough.


Thanks to @洋蛋 for the invite O(∩_∩)O I suspect there are only two reasons I was invited: one, I'm a math major; two, I don't like programming...

The asker's question is actually fairly representative; at least some classmates around me who aren't good with computers (myself necessarily included) have had the same confusion. Especially many of us girls: every lab session or big assignment is pure misery, and we watch the wizards in the class write beautiful (baffling) code with admiration (envy)...

But this kind of question is only worth a little grumbling. As your studies deepen, you realize that instead of sitting here feeling sorry for yourself, you would be better off reading a few more algorithms and typing a few more lines of code~~

=====================================

I'd like to split the asker's question into two parts:

1. Studying mathematics does not strictly require studying computing.

2. Math majors are not the only students who study computing.

First, about mathematics. It can be broadly divided into pure mathematics, computational mathematics, applied mathematics, operations research and control theory, and probability theory and mathematical statistics (statistics can really be treated as a discipline of its own).

My understanding is that if you study pure mathematics and do theoretical research, and you are clever enough, with an exceptional gift for mathematics, then you honestly don't need much computing knowledge: a sheet of paper, a pen, and your brilliant mind are enough.

For applied and computational mathematics, as the names suggest, it is no longer just mathematics; most of it is applied to other settings, solving real problems in mechanics, electromagnetic fields, heat conduction, diffusion, and so on. If you still want to dodge the computer, your work will never be 100% finished. A simple example: most students in these two tracks have taken a course like Numerical Methods. With mathematics alone, we understand that linear systems can be solved with direct methods such as Gaussian elimination and LU decomposition, or with iterative methods such as Jacobi, Gauss-Seidel, and successive over-relaxation (SOR). You may even smugly derive the convergence order of each method, but that doesn't help much, because what people actually need is the solution! Take a sheet of paper and a pen, and try solving a system whose matrix has a few thousand rows...
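(For concreteness, a minimal Jacobi-iteration sketch in Python with NumPy; the small diagonally dominant matrix is made up purely for the example.)

```python
import numpy as np

def jacobi(A, b, iterations=100):
    """Solve A x = b by Jacobi iteration (assumes A is diagonally dominant)."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
print(jacobi(A, b))             # iterative answer
print(np.linalg.solve(A, b))    # direct solve, for comparison
```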

Or take a course like Numerical Solution of Partial Differential Equations. You understand that the finite difference method first partitions the domain into a mesh, then discretizes the differential equation by differences, and finally solves the resulting system. But without a computer you can't even get through step one: are you really going to take pen and paper and cut the domain by hand into hundreds or thousands of rectangles or triangles? What a lovely picture (heh). And even if you are superhumanly capable and finish the partition, you still have to solve for the numerical value at every grid point. Can you keep going? By the way, a computer can refine the mesh adaptively. Can you?
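(A one-dimensional toy version of exactly this workflow, as a hedged sketch: discretize $-u''=\pi^2\sin(\pi x)$ on $[0,1]$ with zero boundary values, whose exact solution is $\sin(\pi x)$.)

```python
import numpy as np

n = 100                           # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Tridiagonal second-difference operator, divided by h^2.
A = (np.diag(2 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
u = np.linalg.solve(A, np.pi**2 * np.sin(np.pi * x))

print(np.max(np.abs(u - np.sin(np.pi * x))))   # discretization error, ~1e-4
```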

Operations research and control theory: it takes mathematics and computers as its main tools and, from the viewpoint of systems and information processing, studies the modelling, analysis, planning, design, control, and optimization of all kinds of systems in society, the economy, finance, the military, production management, and decision-making (paraphrased from Baidu Baike). So how could you not learn computing? (Optimization is actually the direction I like most...)

As for probability and statistics, I don't know much; after probability theory I only wrote one program, a Monte Carlo simulation of Buffon's needle experiment. But I remember that computing various classical probabilities, and simulating various experiments, all involve the computer. Statistics students apparently have plenty of software to learn as well, and R seems genuinely useful; after all, analyzing and processing data without a computer is truly hard~~
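(For the record, here is the kind of Buffon's-needle simulation mentioned above, as a sketch with the needle length equal to the line spacing, so the crossing probability is $2/\pi$.)

```python
# Monte Carlo estimate of pi via Buffon's needle
# (needle length = line spacing = 1, so P(cross) = 2/pi).
import math
import random

trials, hits = 1_000_000, 0
for _ in range(trials):
    d = random.uniform(0, 0.5)               # center-to-nearest-line distance
    theta = random.uniform(0, math.pi / 2)   # acute angle with the lines
    if d <= 0.5 * math.sin(theta):
        hits += 1
print(2 * trials / hits)                      # approximately pi
```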

Second, math majors are not the only ones who study computing. These days almost every science and engineering major at university learns some computing, at minimum one programming language. I know students in the school of management who take more computing courses than we do: C++, Java, microcomputer principles, data structures, databases... In this information age the computer genuinely matters; nearly every industry uses one. So I don't think learning some computing at university is a waste of time, let alone something that lowers your IQ. There is a real rationale for offering these courses: for students not that interested in computers, knowing a bit beyond your own major does no harm, and for students outside CS who discover along the way that they love it, it is a huge benefit and may even give them a second chance to choose. I have such examples around me: people with non-CS undergraduate degrees who went on to study computing in depth after graduation, or who entered the IT industry on the strength of their programming. So study computing seriously; whether you like it or hate it, you will get something out of it!

=====================================

Someone once told me that "whatever you study passes into your character." It really is so: when you study something seriously and with curiosity, something in it will inspire and shape you. All the more so when what you are learning is not some IQ-lowering black art but one of the three essential survival skills of the 21st century (computing, English, driving). So, now that you've asked the question, go study your math and your computing! I'm off to obediently read the copy of C++ Primer Plus that 灰太狼 gave me~~~~~


Every major should learn computing.


I think the introduction of computational methods in mathematics is revolutionary. Before that, we did not know how to computationally verify our results' validity, and even in cases where we already knew a result was true, we lacked empirical data for a refined understanding of the theorem.

One can imagine that before the wide employment of computational methods in mathematics, questions like "Will conjecture A fail if we change the condition of A slightly?" were very subtle. If conjecture A is believed to be true for theoretical reasons, its modification may not be true without sufficient theoretical work to prove it. But then why should we believe that a modification of A might be true? Why should we waste our time working on something that might turn out to be false in the end? What if verifying the modification of A is computationally intensive even for the first few base cases? Why should I even be interested in it, given that it is tedious, difficult work?

And this is just the tip of the iceberg.

Mathematicians do not work in isolation. They need examples to motivate their research. If someone proved a "big theorem" abstractly in hundreds of pages, but the theorem has only a few trivial applications, then the theorem is not very useful, because unless the idea that goes into the proof is truly outstanding, we cannot build much future work on it. In other words, good mathematics should give us "good examples" that motivate further research. Computational methods are extremely useful in this regard, because we can get a lot of examples from actual computation, and a lot of theoretical questions can only be answered either by a direct proof or by finding a counterexample. We are lucky if the counterexample is simple and can be engineered with a combination of well-known techniques. But what if it is "essentially random"? Then we have to resort to actual computation to find it. If we then learn why these counterexamples occur by analyzing the data, we gain a better understanding of the underlying mathematical problem.

On the other hand, it has gradually been recognized that even first-class mathematicians produce errors in their work. Sometimes people "prove" theorems that are difficult to read because the proof consists of many intermediate steps, and obviously in the editing process all important intermediate steps need to be verified. An experienced editor can catch some of the errors, but certainly not all. In the end a paper containing a final result that might be wrong can get published, and further mathematicians citing the work blindly will only build more false statements on top of it. Consider an extreme scenario: it is now well known that Fermat's last theorem is true. But suppose the proof still had a gap (as was the situation before 1994): could someone check it and repair it? The chances are that unless that "someone" is well versed in current research in algebraic number theory, it will be impossible for him or her to know whether the gap exists, let alone how to repair it. The problem would be partly solved if the proof could be converted into data that a machine can verify step by step, so that if the proof does not work, the machine would find an error. In fact, the foundational work on this has been done, but more mathematicians need to become familiar enough with computer science to work with it.

I want to quote a real-life example here. In my own past research, I needed to check cases of computational complexity O(n!·n!). In other words, the first base case required checking 4 sub-cases, the second base case 36 sub-cases, the third 576 sub-cases, and the list keeps going. Without explicitly checking them in Sage, I would have had no idea, short of a proof, whether my method works in general. In fact, the actual computation showed that one of my strategies was wrong: it found a counterexample among the 36 sub-cases. My advisor and I were genuinely surprised. Even now that I know the proof, as I have finally proved the result, I still need to implement it computationally to make it useful for others' research, for they will not really be interested in my actual (highly abstract) proof, but in how to use the results I proved to prove more interesting things. My field, Lie groups and K-theory, is well known to be abstract. But to do actual research in it, I have to be able to work with examples concretely to check the validity of my claims. As a pure mathematician, I think programming is definitely helpful. My only regret is that I did not learn it sooner and master it better.

Reference:

http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundations_files/2014_IAS.pdf


So that you have a way to put food on the table after graduation, instead of having to sit the civil-service exam.


These days, learning to program is like getting a driver's license: you can get by without it, but it's fair to say you're a bit out of step with modern society. So this isn't only a question for math majors. My school has a course called Computing for the Humanities that every humanities major is required to take.


In a book I read long ago — I think it was Erdős being quoted — the gist was that many theorems and patterns remain undiscovered because human computing power is too weak.

Computers make up for that to a large extent. Of all human inventions, the others substitute to some degree for our limbs and senses; only the computer substitutes, to some degree, for the brain.


Honestly, not learning computing is like going back a hundred years to do mathematics. It is such a good auxiliary tool; why not learn it? Do you think all conjectures are conjured out of thin air? Many complicated calculations simply cannot be done by hand, and using the computer well frees people from the cage of calculation. Moreover, computer visualization makes many problems concrete, so it becomes easier to see their geometric meaning, degenerate cases, invariants, and so on.

I suggest the asker first try out Mathematica, get a feel for what the software can do, and then come back to this question.

I think every math student should learn Mathematica, Maple, Sage, Singular, ...
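(In the same spirit, though in Python with SymPy rather than any of the systems named above, here is a tiny taste of what a computer algebra system does; the two polynomials are an arbitrary example.)

```python
# A Groebner basis computation, the bread and butter of systems like
# Singular, Mathematica, Maple, or Sage.
from sympy import groebner, symbols

x, y = symbols('x y')
G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order='lex')
print(G)   # a triangular basis from which the solutions can be read off
```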

Appendix: a talk by Stephen Wolfram, the creator of Mathematica

http://www.ted.com/talks/stephen_wolfram_computing_a_theory_of_everything?language=zh-cn


