Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived at least one major setback and several eras.
Many important advances have been boosted by the use of inexpensive computer emulations. Following an initial period of enthusiasm, the field survived a period of frustration and disrepute. During this period, when funding and professional support were minimal, important advances were made by relatively few researchers. These pioneers were able to develop convincing technology which surpassed the limitations identified by Minsky and Papert. Minsky and Papert published a book (in 1969) in which they summed up a general feeling of frustration (against neural networks) among researchers, and it was thus accepted by most without further analysis. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding.
THE BASICS OF NEURAL NETWORKS
Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes' which contain an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'.
Most ANNs contain some form of 'learning rule' which modifies the weights of the connections according to the input patterns that the network is presented with. In a sense, ANNs learn by example as do their biological counterparts; a child learns to recognize dogs from examples of dogs.
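The flow described above, inputs passing through weighted connections into a node's activation function, can be sketched in a few lines. The sigmoid activation and the particular input and weight values below are illustrative assumptions, not part of the original text.

```python
import math

def sigmoid(x):
    # A common activation function: squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs, weights, bias):
    # A node sums its weighted inputs (its 'connections') and passes
    # the total through its activation function.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# One input pattern flowing through a single hidden node
pattern = [0.5, 0.9, -0.3]
weights = [0.4, -0.6, 0.2]
print(node_output(pattern, weights, bias=0.1))
```

A full layer is just many such nodes applied to the same input pattern, each with its own weights.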
Although there are many different kinds of learning rules used by neural networks, this demonstration is concerned only with one; the delta rule. The delta rule is often utilized by the most common class of ANNs called 'backpropagational neural networks' (BPNNs). Backpropagation is an abbreviation for the backward propagation of error.
With the delta rule, as with other types of backpropagation, 'learning' is a supervised process that occurs with each cycle or 'epoch' (i.e. each time the network is presented with a new input pattern) through a forward activation flow of outputs, and the backward error propagation of weight adjustments. More simply, when a neural network is initially presented with a pattern it makes a random 'guess' as to what it might be. It then sees how far its answer was from the actual one and makes an appropriate adjustment to its connection weights.
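The guess-compare-adjust cycle can be illustrated with the delta rule applied to a single linear node. The learning rate and the toy data (learning y = 2x) are invented for illustration; a real BPNN would propagate the error backward through multiple layers.

```python
def train_epoch(weights, bias, samples, lr=0.1):
    # One epoch: present each pattern, guess, measure the error,
    # and adjust each connection weight in proportion to its input.
    for inputs, target in samples:
        guess = sum(i * w for i, w in zip(inputs, weights)) + bias
        error = target - guess          # how far the guess was off
        weights = [w + lr * error * i for w, i in zip(weights, inputs)]
        bias += lr * error
    return weights, bias

# Learn the relationship y = 2*x from a few examples
samples = [([x], 2.0 * x) for x in (0.0, 1.0, 2.0, 3.0)]
w, b = [0.0], 0.0                       # initial (uninformed) weights
for _ in range(50):
    w, b = train_epoch(w, b, samples)
print(round(w[0], 2))                   # converges toward 2.0
```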
Backpropagation performs a gradient descent within the solution's vector space towards a 'global minimum' along the steepest vector of the error surface. The global minimum is that theoretical solution with the lowest possible error. The error surface itself is a hyperparaboloid but is seldom 'smooth', as is depicted in the graphic below. Indeed, in most problems, the solution space is quite irregular with numerous 'pits' and 'hills' which may cause the network to settle down in a 'local minimum' which is not the best overall solution.
How the delta rule finds the correct answer
Since the nature of the error surface cannot be known a priori, neural network analyses often require a large number of individual runs to determine the best solution. Most learning rules have built-in mathematical terms to assist in this process which control the 'speed' (Beta-coefficient) and the 'momentum' of the learning. The speed of learning is actually the rate of convergence between the current solution and the global minimum. Momentum helps the network to overcome obstacles (local minima) in the error surface and settle down at or near the global minimum.
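As a sketch of how these two terms act, here is a plain gradient-descent loop with a momentum term carrying the search across the error surface. The quadratic "error surface", learning rate, and momentum value are invented for illustration.

```python
def minimize(grad, x, lr=0.1, momentum=0.8, steps=100):
    # Gradient descent with momentum: the velocity accumulates past
    # steps, which can carry the search over small bumps (local
    # minima) instead of stalling in them. lr sets the speed.
    velocity = 0.0
    for _ in range(steps):
        velocity = momentum * velocity - lr * grad(x)
        x += velocity
    return x

# A simple parabolic "error surface" with its minimum at x = 3
grad = lambda x: 2.0 * (x - 3.0)
print(round(minimize(grad, x=0.0), 3))  # settles at or near 3.0
```

Too high a learning rate or momentum makes the search overshoot and oscillate; too low makes convergence slow, which is why multiple runs with different settings are common.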
Once a neural network is 'trained' to a satisfactory level it may be used as an analytical tool on other data. To do this, the user no longer specifies any training runs and instead allows the network to work in forward propagation mode only. New inputs are presented to the input layer, where they filter into and are processed by the middle layers as though training were taking place; however, at this point the output is retained and no backpropagation occurs. The output of a forward propagation run is the predicted model for the data, which can then be used for further analysis and interpretation.
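Forward-propagation-only operation amounts to applying the frozen, already-trained weights layer by layer with no error computation. A minimal sketch for a hypothetical 2-3-1 network follows; the weight values are invented, standing in for the result of a prior training run.

```python
import math

def forward(inputs, layers):
    # Run the network in forward propagation mode only: activations
    # flow layer to layer, no error is computed, no weights change.
    activations = inputs
    for weights, biases in layers:
        activations = [
            1.0 / (1.0 + math.exp(-(sum(a * w for a, w in zip(activations, row)) + b)))
            for row, b in zip(weights, biases)
        ]
    return activations

# Frozen (already-trained) weights for a 2-3-1 network; values invented
hidden = ([[0.5, -0.4], [0.3, 0.8], [-0.6, 0.1]], [0.0, 0.1, -0.2])
output = ([[1.2, -0.7, 0.4]], [0.05])
print(forward([0.9, 0.2], [hidden, output]))
```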
It is also possible to over-train a neural network, which means that the network has been trained exactly to respond to only one type of input; this is much like rote memorization. If this should happen then learning can no longer occur and the network is referred to as having been "grandmothered" in neural network jargon. In real-world applications this situation is not very useful, since one would need a separate grandmothered network for each new kind of input.
ADVANTAGES OF NEURAL NETWORKS
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer "what if" questions.
Other advantages include:
- Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
- Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time.
- Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
- Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.
HOW A NEURAL NETWORK OPERATES
A neural network operates similarly to the brain's neural network. A 'neuron' in a neural network is a simple mathematical function capturing and organizing information according to the architecture. The network closely resembles statistical methods such as curve fitting and regression analysis.
A neural network consists of layers of interconnected nodes. Each node is a perceptron and resembles a multiple linear regression. The perceptron feeds the signal produced by a multiple linear regression into an activation function that may be nonlinear.
In a multi-layered perceptron (MLP), perceptrons are arranged in interconnected layers. The input layer receives input patterns. The output layer contains classifications or output signals to which input patterns may map. For example, the patterns may comprise a list of quantities for technical indicators regarding a security; potential outputs could be 'buy', 'hold' or 'sell'. Hidden layers adjust the weightings on the inputs until the error of the neural network is minimal. It is hypothesized that hidden layers extract salient features in the input data that have predictive power with respect to the outputs. This describes feature extraction, which performs a function similar to statistical techniques such as principal component analysis.
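The description above, a perceptron as a multiple linear regression fed into a possibly nonlinear activation, can be written directly. The indicator values, weights, tanh activation, and the thresholds mapping the score to buy/hold/sell are all invented for illustration.

```python
import math

def perceptron(indicators, weights, bias):
    # Multiple-linear-regression part: a weighted sum of the inputs
    linear = sum(x * w for x, w in zip(indicators, weights)) + bias
    # Nonlinear activation applied to the regression signal
    return math.tanh(linear)

# Hypothetical technical-indicator values for one security
indicators = [0.3, -0.1, 0.7]
weights = [0.8, 0.5, -0.2]
score = perceptron(indicators, weights, bias=0.1)

# A downstream layer might map the score to a trading signal
signal = "buy" if score > 0.25 else "sell" if score < -0.25 else "hold"
print(score, signal)
```

In an MLP, many such perceptrons are stacked in layers, and training adjusts their weights until the classification error is minimal.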