ENFRN_GRN

Category Cross-Omics>Pathway Analysis/Gene Regulatory Networks/Tools

Abstract ENFRN_GRN is a novel multi-layer Evolutionary Trained Neuro-Fuzzy Recurrent Network (ENFRN) applied to the problem of Gene Regulatory Network (GRN) reconstruction, which addresses the major drawbacks of currently existing computational methods.

The manufacturer’s choice of approach was driven by the computational-power benefits that neural network (NN) based methods provide.

A key feature of the self-organized ENFRN algorithm is its ability to produce an adaptive number of temporal fuzzy rules that describe the relationships between the input (regulating) genes and the output (regulated) gene.

Related to this, another advantage of the manufacturer's approach is that it removes the need for prior data discretization, a characteristic of many computational methods that often leads to information loss.

The dynamic mapping capabilities emerging from the recurrent structure of ENFRN and the incorporation of fuzzy logic drive the construction of easily interpretable fuzzy rules of the form:

“IF gene x is highly expressed at time t THEN its dependent/target gene y will be lowly expressed at time t+1”.
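Such a temporal rule can be read operationally: the degree to which the antecedent holds at time t weights the consequent predicted for time t+1. A minimal sketch using Gaussian membership functions follows; the centers, widths, and the toy defuzzification step are illustrative assumptions, not the toolbox's actual internals:

```python
import math

def gaussian_mf(x, center, width):
    """Membership degree of expression level x in a Gaussian fuzzy set."""
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

# Illustrative fuzzy sets for normalized expression levels (assumed values).
HIGH = (0.9, 0.2)   # (center, width) of the "highly expressed" set
LOW = (0.1, 0.2)    # (center, width) of the "lowly expressed" set

# "IF gene x is HIGH at time t THEN gene y is LOW at time t+1":
# the rule fires to the degree x(t) belongs to HIGH, and that firing
# strength weights the LOW consequent predicted for y(t+1).
firing = gaussian_mf(0.85, *HIGH)              # x is strongly "high" at time t
y_pred = firing * LOW[0] + (1 - firing) * 0.5  # toy weighted defuzzification
```

Because no hard discretization is applied, x(t) = 0.85 partially activates the rule instead of being forced into a crisp "high"/"low" bin.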

The evolutionary training, based on the Particle Swarm Optimization (PSO) framework, tries to avoid the drawbacks of classical neural network training algorithms.

Additionally, the manufacturers address the under-determinism problem by selecting the most suitable set of regulatory genes via a time-effective procedure embedded in the construction phase of ENFRN.

Also, besides determining the regulatory relations among genes, the manufacturer's method can determine the type of regulation (activation or repression) and, at the same time, assign a score that can be used as a measure of confidence in the retrieved regulation.

Experiments on real data sets derived from microarray experiments on Saccharomyces cerevisiae demonstrate the ability of this method to capture biologically established gene regulatory interactions while outperforming other computational methods.

MATLAB toolbox (ENFRN_GRN 1.0) --

All the algorithms and methods have been implemented in a MATLAB toolbox (ENFRN_GRN 1.0) that includes functions for:

1) ENFRN initial structure creation;

2) Optimization, using Binary PSO (a discrete version of the Particle Swarm Optimization algorithm);

3) Training (utilizing PSO); and

4) Gene Regulatory Network reconstruction from 'gene expression' time-series data.
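To illustrate step 2 above, a minimal Binary PSO can be sketched as follows: each velocity component passes through a sigmoid that gives the probability of the corresponding bit being 1. The swarm size, iteration count, acceleration coefficients, and velocity clamp are illustrative assumptions; the toolbox's actual settings are not specified in this entry:

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=10, iters=50, vmax=4.0, seed=0):
    """Minimal Binary PSO sketch: maximize fitness over bit strings."""
    rng = random.Random(seed)
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                v = (vel[i][d]
                     + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                     + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))  # clamp to avoid saturation
                pos[i][d] = 1 if rng.random() < sigmoid(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy use: each bit marks a rule as kept/deleted; here the fitness is one-max.
best, best_f = binary_pso(sum, n_bits=8)
```

In the toolbox's setting, the bit string would encode which rules (and associated nodes) to keep, and the fitness would reflect model quality after pruning.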

The Evolutionary Recurrent Neuro Fuzzy Network process --

Architectural structure - Unlike other ‘neuro-fuzzy network’ architectures, where the network structure is fixed and the rules must be assigned a priori, this architecture initially contains No fuzzy rules; they are constructed during learning, in a self-organized manner. Two learning phases (‘structure learning’ and ‘parameter learning’) are used to accomplish this task.

The ‘structure learning’ phase is responsible for generating the fuzzy IF-THEN rules as well as judging the feedback configuration, while the ‘parameter learning’ phase tunes the free parameters of each dynamic rule (such as the shapes and positions of the membership functions).

Both phases are accomplished through repeated training on the input-output patterns.

Partitioning of input and output space - The creation of a ‘new rule’ within ENFRN corresponds to the creation of a new cluster in the input space. Therefore the way the input space is partitioned (clustered) determines the number of ‘fuzzy rules’ created.

Thus, the number of rules created by ENFRN is problem dependent: the more complex the problem, the greater the number of rules.

ENFRN has to determine if a new output cluster must be created. A cluster in the output space is the consequent of a rule. One of the characteristics of ENFRN is that more than one rule can be connected to the same consequent.

As a result, the creation of a cluster in the input space does not necessarily mean a subsequent creation of a cluster in the output space.

It depends on the incoming pattern, since the newly created cluster in the input space could be connected to an already existing cluster in the output space. The model decides whether or not to create a new output cluster.
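A minimal one-dimensional sketch of this partitioning decision follows; the membership threshold and cluster width are illustrative assumptions, not the toolbox's actual criteria:

```python
import math

def maybe_new_cluster(x, centers, threshold=0.5, width=0.2):
    """Return (create_new, index_of_best_cluster): a new input cluster (and
    hence a new rule) is created only when x's best membership over the
    existing clusters falls below the threshold."""
    if not centers:
        return True, None
    degrees = [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]
    best = max(range(len(centers)), key=lambda i: degrees[i])
    return degrees[best] < threshold, best

centers = [0.2, 0.8]                               # existing input clusters
near_new, nearest = maybe_new_cluster(0.21, centers)  # near 0.2: reuse cluster 0
far_new, _ = maybe_new_cluster(0.5, centers)          # far from both: new rule
```

An analogous check on the output side decides whether the new rule's consequent reuses an existing output cluster or spawns a new one.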

PSO for ENFRN Optimization and Learning - After the creation of the initial structure described above, ENFRN enters a two-phase learning process in which the initial structure is optimized and the various parameters (e.g. the centers and widths of the ‘fuzzy sets’, as well as the link weights among connections) are fine-tuned. Both phases of this process are based on PSO.

Structure Optimization - In this phase of the ENFRN learning algorithm, the manufacturers have developed a scheme for deleting rules, along with their corresponding output nodes (in cases where those nodes are not connected to other rule nodes), if they are either redundant or the patterns they describe can be efficiently represented by other rules.

Therefore, the objective of this part of the ‘learning process’ is twofold: decreasing redundancy and simplifying the model.

For the optimization of the ENFRN structure the manufacturers have employed a discrete version of the PSO algorithm.

Parameters Learning - The key idea of using PSO in this final phase of the training process is to fine-tune the centers and widths of the ‘fuzzy sets’ comprising the rule and output nodes, as well as the weights assigned to the recurrent links, so as to minimize the model's prediction error.
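As an illustration of this final phase, a generic continuous PSO minimizer can be sketched as follows; the swarm size, inertia/acceleration coefficients, and search bounds are illustrative assumptions, not the toolbox's actual settings:

```python
import random

def pso_minimize(loss, dim, n_particles=15, iters=100, seed=1,
                 w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0):
    """Standard continuous PSO sketch: minimize loss over a dim-vector.
    In the ENFRN setting the vector would hold membership-function
    centers/widths and recurrent link weights, and the loss would be
    the model's prediction error on the time-series data."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = loss(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy example: recover a 2-parameter vector by minimizing squared error.
target = [0.3, -0.2]
best, err = pso_minimize(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)), 2)
```

Unlike gradient-based neural network training, this population-based search requires no derivatives of the loss, which is one of the drawbacks of classical training algorithms that the evolutionary approach tries to avoid.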

ENFRN reconstructs GRNs --

From ENFRN Structure to Regulation Type - A set of fuzzy rules derived from a given ENFRN structure is employed to determine the regulation type (i.e. up, down, or even no regulation) that the input imposes on the output, based on a given data set.

Each ENFRN rule has an antecedent (IF) part consisting of a number of fuzzy sets (one per input variable).

These fuzzy sets correspond to linguistic labels describing the values of the input variable. In this case the linguistic labels correspond to a certain ‘expression level’ of the input gene (e.g. high, medium-high, medium, etc.). The same applies to the output.
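To illustrate how such linguistic labels can translate into a regulation type plus a score, here is a hypothetical sketch; the label ordering and the simple monotonicity-based score are assumptions, not the toolbox's actual confidence measure:

```python
# Hypothetical ordering of linguistic labels, from low to high expression.
LEVELS = ["low", "medium-low", "medium", "medium-high", "high"]

def regulation_type(rules):
    """Infer the regulation sign from (input_label, output_label) rule pairs:
    if higher input levels map to higher output levels the regulator
    activates; if they map to lower levels, it represses. The score counts
    consistent transitions and serves as a toy confidence measure."""
    score = 0
    for (in_a, out_a), (in_b, out_b) in zip(rules, rules[1:]):
        din = LEVELS.index(in_b) - LEVELS.index(in_a)
        dout = LEVELS.index(out_b) - LEVELS.index(out_a)
        score += (din > 0) * ((dout > 0) - (dout < 0))
    if score > 0:
        return "activation", score
    if score < 0:
        return "repression", -score
    return "no regulation", 0

# Rules sorted by input level: rising input maps to falling output.
rules = [("low", "high"), ("medium", "medium"), ("high", "low")]
kind, score = regulation_type(rules)
```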

Determining potential regulators - Within the methodology that the manufacturer follows to reconstruct GRNs they have developed a procedure that identifies sets of potential gene regulators by making use of the ENFRN resources.

The key idea of this procedure is to take advantage of the computationally efficient first phase of ‘ENFRN learning’ to make an initial coarse selection of possible regulators for a specific gene, out of the whole set of genes present in a specific dataset.

Later, the selected genes are thoroughly checked using the remaining optimization phases of the ENFRN learning process to determine the best regulators.
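The coarse-then-fine idea can be sketched generically as follows; the scoring functions and shortlist size are placeholders, not the toolbox's actual criteria:

```python
def select_regulators(candidates, coarse_score, fine_score, k=10):
    """Two-stage selection sketch: a cheap coarse score (standing in for the
    fast first learning phase) prunes the candidate pool to the top k, then
    a costly fine score (standing in for the full optimization phases)
    ranks the survivors and returns the best regulator."""
    shortlist = sorted(candidates, key=coarse_score, reverse=True)[:k]
    return max(shortlist, key=fine_score)

# Toy usage with hypothetical scores: gene 42 is the best regulator.
genes = list(range(100))
best_gene = select_regulators(genes,
                              coarse_score=lambda g: -abs(g - 42),
                              fine_score=lambda g: -((g - 42) ** 2),
                              k=5)
```

The benefit is that the expensive fine scoring runs only on k candidates instead of the whole gene set, which is what makes the procedure time-effective.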

System Requirements

Contact manufacturer.

Manufacturer

Manufacturer Web Site ENFRN_GRN

Price Contact manufacturer.

G6G Abstract Number 20586

G6G Manufacturer Number 104189