The research object - the process of training and operation of a multilayer feedforward neural network.

The research target - the development of methods and models for speeding up the training and operation of a multilayer feedforward neural network which, through scheduling and scaling of the computing system and through adaptation and productivity increase, substantially reduce the time needed to solve tasks of large dimension.

Methods of the research: for uniform neural data distribution, the theory of artificial neural systems, linear algebra, algorithmic methods and computational methods are used; for the estimation model, parametric modeling and system analysis are applied; for adaptation and speedup of neural data processing - graph theory, matrix theory and the principles of computer network organization; for computing system scaling - the theoretical foundations of high-performance systems and the theory of parallel and distributed computing; to confirm the efficiency of the results - simulation modeling using high-level programming languages.

The scientific novelty:
1. For the first time, a method of uniform neural data distribution is proposed, based on the dynamic assignment of sets of neurons to processors depending on the amount of input data; it substantially reduces the training and operation time of a multilayer neural network and decreases computational complexity by an order of magnitude in comparison with existing sequential methods (a minimal illustration is sketched below).
2. For the first time, a model for estimating the speedup of multilayer neural network training and operation is proposed, characterized by the choice of effective values that take into account the volume and distribution of the input information, the data transmission in the virtual topology ("star", "grid", "graph") and the hardware characteristics of the environment and processors; it increases the productivity of neural processing and considerably speeds up the solution of large-dimension problems in a distributed computing environment (see the second sketch below).
3. For the first time, a method of computing system scaling is proposed, characterized by the exact determination of the execution time of the neural data distributed among processors; it allows resources to be scheduled effectively and the benefit of further capacity increases of a heterogeneous or homogeneous computing environment to be estimated for task acceleration.
4. A model for speeding up data processing is further developed; it accounts for the input data volume and the virtual topology ("star", "grid", "graph") to reduce the number of transfers between processors, which allows a multilayer neural network to be adapted to a distributed computing environment for faster solution of large-scale tasks.

The degree of implementation - the methods and models for speeding up neural network data processing in a distributed computing environment have been brought to the level of software implementation, which made it possible to carry out: quality grading of seamless tubes of different purposes at SPC LLC "Technology", Kharkov, Ukraine (act of 18.05.09); forecasting of the ecological situation in the buffer zone of PLC "ArcelorMittal Kryviy Rih" for LLC "ATOMECOSYSTEM", Kharkov, Ukraine (act of 30.05.11); the results were also incorporated into the educational process of Kharkov National University of Radio Electronics (act of 15.03.10).
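To make the idea behind the distribution method (novelty item 1) concrete, the following minimal Python sketch assigns blocks of a layer's neurons to processors in proportion to an assumed per-processor throughput. It is an illustration under simplifying assumptions, not the implementation developed in the work; the function and parameter names (distribute_neurons, relative_speed) are hypothetical.

```python
def distribute_neurons(num_neurons, relative_speed):
    """Split num_neurons of one layer into per-processor index ranges.

    relative_speed[i] is an assumed measure of processor i's throughput
    (e.g. weight-matrix rows processed per millisecond).
    """
    total = sum(relative_speed)
    blocks = []
    start = 0
    for i, speed in enumerate(relative_speed):
        if i == len(relative_speed) - 1:
            share = num_neurons - start   # last processor absorbs the rounding remainder
        else:
            share = round(num_neurons * speed / total)
        blocks.append((start, start + share))
        start += share
    return blocks


if __name__ == "__main__":
    # 1000 neurons of one layer over 4 heterogeneous processors.
    print(distribute_neurons(1000, [1.0, 1.0, 2.0, 0.5]))
    # [(0, 222), (222, 444), (444, 888), (888, 1000)]
```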
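The speedup-estimation model (novelty item 2) can likewise be illustrated by a generic parallel performance estimate: per-processor compute time plus a topology-dependent communication cost for the "star", "grid" and "graph" topologies. The transfer counts, latency and bandwidth values below are assumptions chosen for illustration only and do not reproduce the exact model of the work.

```python
import math


def transfers_per_epoch(topology, p):
    """Assumed number of message exchanges needed to combine partial results."""
    if topology == "star":    # every worker exchanges with a central node
        return 2 * (p - 1)
    if topology == "grid":    # nearest-neighbour exchanges on a 2-D grid
        return 4 * p
    if topology == "graph":   # tree-like reduction over an arbitrary graph
        return 2 * p * max(1, math.ceil(math.log2(p)))
    raise ValueError(topology)


def estimated_speedup(t_seq, p, topology, msg_bytes,
                      latency_s=1e-4, bandwidth_bps=1e9):
    """Speedup = sequential time / (per-processor compute time + communication time)."""
    t_comm = transfers_per_epoch(topology, p) * (latency_s + msg_bytes / bandwidth_bps)
    t_par = t_seq / p + t_comm
    return t_seq / t_par


if __name__ == "__main__":
    # Compare the three virtual topologies for one assumed workload.
    for topo in ("star", "grid", "graph"):
        print(topo, round(estimated_speedup(t_seq=10.0, p=8, topology=topo,
                                            msg_bytes=4_000_000), 2))
```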
The scope of use - the development of neural network systems for speeding up data processing in a wide variety of tasks; intelligent systems that process substantial volumes of input information in various industries; and the educational process for training specialists in parallel and distributed computing technologies as well as in neural network data processing.