Total No. of Questions : 12]                                [Total No. of Pages : 5
P944                              [3664] - 201
                                  B.E. (E & TC)
                        ARTIFICIAL NEURAL NETWORKS
                               (2003 Course)
Time : 3 Hours]                                              [Max. Marks : 100
Instructions to the candidates:
1) Answer 3 questions from Section I and 3 questions from Section II.
2) Answers to the two sections should be written in separate answer books.
3) Neat diagrams must be drawn wherever necessary.
4) Use of logarithmic tables, slide rule, Mollier charts, electronic pocket calculator and steam tables is allowed.
5) Assume suitable data, if necessary.

SECTION - I

Q1) a) The logic networks shown in Fig. 1a) and Fig. 1b) use the McCulloch-Pitts model neuron. Show that the two networks are logically equivalent to each other. The networks use the following transfer function:
        f(net) = 0, net < 0
               = 1, net > 0
    Tabulate the answers, showing the intermediate outputs. [8]
    b) Design the following two-input gates using the McCulloch-Pitts model. Clearly show the threshold for each neuron, the truth table and the transfer function used: [8]
        i) NAND
        ii) NOR
        iii) Exclusive-OR (XOR)
    c) Draw the structure of a biological neuron and label its various parts. [2]

OR

Q2) a) What is the noise-saturation dilemma in activation dynamics? Explain how the shunting activation model solves this dilemma. [8]
    b) Explain how perceptron learning can be interpreted as a gradient-descent algorithm. What is the gradient term here? [5]
    c) Explain the distinction between supervised, unsupervised and reinforcement learning. [5]

Q3) a) Design an Adaline network with inputs x1 and x2, whose corresponding targets are t1 and t2 respectively. Train the Adaline network with learning rate equal to 0.4 for three iterations, where
        x1 = [1 1 1]',  t1 = [ 1]
        x2 = [1 1 1]',  t2 = [1]
    Assume the initial weight vector [0 0 0]. Find the error in each iteration. [8]
    b) Explain the steps involved in training a back-propagation network. Also explain its architecture. [8]

OR

Q4) a) Explain the training algorithm used in MRIII (Madaline Rule III). Also explain the minimum-disturbance principle in the context of Madaline networks. [8]
    b) Define a radially symmetric function. Give some examples and sketch them. Give the steps used in training an RBF network. [8]

Q5) a) Explain simulated annealing in detail. Clearly explain its role in escaping local minima. [6]
    b) Explain the algorithm used in training a Boltzmann machine. [6]
    c) Explain stochastic update in short. [4]

OR

Q6) a) Design a Hopfield network for 4-bit bipolar patterns. The training patterns are
        S1 = [1 1 1 1]
        S2 = [ 1 1 1 1]
        S3 = [ 1 1 1 1]
    Find the weight matrix and the energy associated with sample S1. [8]
    b) Explain the architecture of the Hopfield model of neural network. Also give the training algorithm for the same. [8]

SECTION - II

Q7) a) Cluster the following data samples using the k-means algorithm:
        x1 (1, 1.2)    x2 (1.5, 2)    x3 (1.6, 3.2)   x4 (0, 1)     x5 (2.1, 3)
        x6 (2.8, 0)    x7 (0.5, 3.5)  x8 (0.9, 2.5)   x9 (3, 1.1)   x10 (2, 2.8)
    Assume the initial number of clusters K = 3 with representative vectors/centroids C1 (1, 1.2), C2 (2.1, 3) and C3 (3, 1.1). Use Euclidean distance to cluster all ten samples. Tabulate the distances of all the samples from C1, C2 and C3 for one epoch. After the epoch is done, list the samples belonging to each of the clusters C1, C2 and C3, and also find the new centroids. [12]
    b) Find the winner using a Maxnet with four neurons and inhibitory weights = 0.25, when the initial activations are a1(0) = 0.1, a2(0) = 0.6, a3(0) = 0.3, a4(0) = 0.5. [6]

OR

Q8) a) Explain the learning algorithm for Kohonen's topology-preserving network (SOM). [6]
    b) Explain the architecture of the Hamming network. Give the steps used to find the Hamming distance using a Hamming network. [6]
    c) Explain the LVQ1 (Learning Vector Quantization 1) algorithm. [6]

Q9) a) Design a TAM to associate the characters shown in Fig. 2a and Fig. 2b. Assume
        A  = [1 1 1 1]
        N  = [ 1 1 1 1]
        N1 = [ 1 1 1 1]                                   [8]
    b) Explain the following: [8]
        i) Pattern association
        ii) Pattern classification
        iii) Pattern clustering
        iv) Pattern mapping

OR

Q10) a) Explain the following terms in the context of autoassociative memory: [8]
        i) Encoding
        ii) Storage
        iii) Retrieval
        iv) Performance
    b) Explain the need to define an energy function for associative networks. State and explain the BAM energy theorem. [8]

Q11) a) Explain how neural network principles are useful in control-system applications. [8]
    b) How can a neural network be used in the problem of handwritten digit recognition? [8]

OR

Q12) a) Explain the difficulties involved in solving the Travelling Salesman Problem. How can a Hopfield network be used in this application? [8]
    b) Explain the steps in the solution of a general optimization problem by a neural network. [8]
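As a study aid for Q7(a), one assign-and-update epoch of k-means on the paper's data can be sketched in Python. This is a minimal illustration only; the function name `kmeans_epoch` and the dictionary layout are this sketch's own choices, not part of the paper.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Data samples and initial centroids exactly as given in Q7(a).
samples = {
    "x1": (1, 1.2), "x2": (1.5, 2), "x3": (1.6, 3.2), "x4": (0, 1),
    "x5": (2.1, 3), "x6": (2.8, 0), "x7": (0.5, 3.5), "x8": (0.9, 2.5),
    "x9": (3, 1.1), "x10": (2, 2.8),
}
centroids = {"C1": (1, 1.2), "C2": (2.1, 3), "C3": (3, 1.1)}

def kmeans_epoch(samples, centroids):
    """One k-means epoch: assign each sample to the nearest centroid
    by Euclidean distance, then recompute each centroid as the mean
    of its assigned samples."""
    clusters = {c: [] for c in centroids}
    for name, x in samples.items():
        nearest = min(centroids, key=lambda c: dist(x, centroids[c]))
        clusters[nearest].append(name)
    new_centroids = {
        c: tuple(sum(samples[n][i] for n in members) / len(members)
                 for i in range(2))
        for c, members in clusters.items() if members
    }
    return clusters, new_centroids

clusters, new_centroids = kmeans_epoch(samples, centroids)
```

With the paper's initial centroids, the first epoch assigns {x1, x2, x4} to C1, {x3, x5, x7, x8, x10} to C2 and {x6, x9} to C3, giving new centroids of roughly (0.83, 1.4), (1.42, 3.0) and (2.9, 0.55); repeating the call until the assignments stop changing completes the algorithm.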
