21: Regulatory Networks- Inference, Analysis, Application

Inferring latent temporal progression and regulatory networks from cross-sectional transcriptomic data of cancer samples

Unraveling the molecular regulatory networks underlying disease progression is critically important for understanding disease mechanisms and identifying drug targets. Existing methods for inferring gene regulatory networks (GRNs) rely mainly on time-course gene expression data. However, most available omics data from cross-sectional studies of cancer patients lack sufficient temporal information, posing a key challenge for GRN inference. By quantifying latent progression with a random-walk-based manifold distance, we propose a latent-temporal-progression-based Bayesian method, PROB, for inferring GRNs from cross-sectional transcriptomic data of tumor samples. The robustness of PROB to measurement variability in the data is mathematically proved and numerically verified. Performance evaluation on real data indicates that PROB outperforms other methods in both pseudotime inference and GRN inference. Applications to bladder cancer and breast cancer demonstrate that our method is effective in identifying key regulators of cancer progression or drug targets. The identified gene ACSS1 is experimentally validated to promote epithelial-to-mesenchymal transition of bladder cancer cells, and the predicted FOXM1-target interactions are verified and are predictive of relapse in breast cancer. Our study suggests effective new ways of modeling clinical transcriptomic data for characterizing cancer progression, and facilitates the translation of regulatory-network-based approaches into precision medicine.

Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. Illustration of the PROB framework for inferring the causal gene regulatory network from…

Fig 2. Demonstrating robustness of PROB using synthetic datasets at different levels of variabilities.

Fig 3. Comparison of PROB with other existing pseudotime inference methods and GRN inference methods…

Fig 4. Reconstructing EMT regulatory networks during bladder cancer progression. (a) Expression patterns…

Fig 5. Experimental validation of the predicted role of ACSS1 in EMT of bladder cancer.

Fig 6. FOXM1 was revealed as a key gene underlying breast cancer progression by PROB.


SCENIC: single-cell regulatory network inference and clustering

We present SCENIC, a computational method for simultaneous gene regulatory network reconstruction and cell-state identification from single-cell RNA-seq data (http://scenic.aertslab.org). On a compendium of single-cell data from tumors and brain, we demonstrate that cis-regulatory analysis can be exploited to guide the identification of transcription factors and cell states. SCENIC provides critical biological insights into the mechanisms driving cellular heterogeneity.

Conflict of interest statement

Competing financial interests

The authors declare no competing financial interests.

Figures

Figure 1. The SCENIC workflow and its application to the mouse brain.

Figure 2. Cross-species comparison of neuronal networks and cell types.

Figure 3. SCENIC overcomes tumor effects and unravels relevant cell states and GRNs in cancer.


Results

Formulation of the proposed method

LASSO is a linear regression method that penalizes the sum of the absolute values of the regression coefficients 49 . It combines the L2-norm term, i.e., the residual sum of squares, with the L1-norm of the regression coefficients, which induces sparsity by shrinking coefficients towards zero. The fused extension of the classical LASSO formulation was proposed for problems with a meaningful order over the considered features (i.e., regressors). In fused LASSO, the L1 penalty is imposed not only on the (ordered) regression coefficients, as in LASSO, but also on the differences of consecutive regression coefficients under the assumed order of the corresponding regressors 50 .
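As an illustration of the two penalties just described, here is a minimal NumPy sketch of the objective values (the function and variable names are ours, not from the paper):

```python
import numpy as np

def lasso_objective(X, y, beta, lam):
    """Classical LASSO objective: the L2-norm term (residual sum of
    squares) plus an L1 penalty that shrinks coefficients towards zero."""
    residual = y - X @ beta
    return residual @ residual + lam * np.sum(np.abs(beta))

def fused_lasso_objective(X, y, beta, lam1, lam2):
    """Fused LASSO: additionally penalizes the absolute differences of
    consecutive coefficients, assuming the regressors carry a
    meaningful order."""
    return (lasso_objective(X, y, beta, lam1)
            + lam2 * np.sum(np.abs(np.diff(beta))))

X = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
y = np.array([1.0, 2.0])

# A constant coefficient vector incurs no fusion penalty:
beta = np.ones(3)
assert fused_lasso_objective(X, y, beta, 0.5, 10.0) == lasso_objective(X, y, beta, 0.5)
```

The fusion term is what later lets the method tie together networks inferred from different data sets.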

In this section, we formulate an approach that extends the concept of fused LASSO by incorporating information about the similarity of the differential behavior of the response and regressor genes. Every gene is taken in turn as the response, and the genes coding for transcription factors (TF genes) are used as regressors. The fusion imposes a similarity constraint on the networks obtained from the individual data sets. In this way, the approach ensures the sparsity commonly observed in gene regulatory networks, together with biologically meaningful, data-driven evidence for the inferred relationships. In addition, we describe the experimental scenario and the type of data to which the approach is readily applicable.

Experimental scenario and weight matrices

For consistency throughout the text, all superscripts refer to the experiments/conditions and subscripts denote genes. We define the approach for P genes used as regressors and a single gene used as the response. The approach assumes that k gene expression data sets, denoted by X i , are gathered under k different conditions, alongside corresponding reference (control) data sets X c,i , 1 ≤ i ≤ k, all of them containing the expression levels of the P genes over N i , 1 ≤ i ≤ k, time points or perturbations (Fig. 1, steps 1 and 2). Note that the data sets from the k different conditions do not have to cover the same time domain or use the same sampling frequency; the only requirement is that each data set allows the corresponding covariance matrix for the P genes to be reliably estimated. In addition, the profiles Y i of the single response gene over the k conditions, together with the corresponding control profiles Y c,i , 1 ≤ i ≤ k, are given.

Representation of the analysis flow, data sets and their transformation for usage in the model.

Representation of the condition and control data sets X i and X c,i , respectively, containing the expression levels of P genes, used as regressors, over N i time points, 1 ≤ i ≤ k. The block matrix X contains the gene expression values X i , 1 ≤ i ≤ k, for the regressor genes (transcription factors) on the diagonal. Y collects the profiles Y i , 1 ≤ i ≤ k, for the response gene. The weight matrices W i contain the similarity of differential behavior between the response gene and each of the regressors, based on the data sets from the k perturbation experiments and the corresponding controls. The regression coefficients β i , 1 ≤ i ≤ k, encode the regulatory relationships between a TF and its targets in the k perturbation experiments.

Since reference data sets X c,i , 1 ≤ i ≤ k, are available, we first determine the differential behavior of each regressor, as well as of the response, in each condition. To this end, we rely on the gene-specific B-statistic 11 , corresponding to the log-odds that a gene is differentially expressed at a given time point of a particular condition in comparison to the control. Let Pr i j,t , 1 ≤ j ≤ P, 1 ≤ i ≤ k, 1 ≤ t ≤ N i , be the probability that gene j at time point t under condition i is differentially expressed. Then the matrix Pr i of dimension N i × P gathers the probabilities for the time-dependent differential behavior of the considered genes, estimated by the corresponding B-statistics. The B-statistic was estimated for each gene at each time point by comparing the treatment data set X i with the respective control X c,i , 1 ≤ i ≤ k.

The derived probabilities can be used to define, for each condition i (1 ≤ i ≤ k), a weight matrix W i that captures the similarity between the response gene and each of the regressor genes based on their differential behavior (Fig. 1, step 3), via

w i j = (1/N i ) Σ t |Pr i Y,t − Pr i j,t |,   (1)

where Pr i Y,t and Pr i j,t are the probabilities of differential behavior for the response gene Y and the regressor gene j (1 ≤ j ≤ P), respectively, at time point/perturbation t.

If the value of w i j is close to zero, the j-th regressor gene has, on average, a differential behavior similar to that of the response over all time points/perturbations in the data set from one condition. More specifically, w i j is close to zero if either both genes are differentially expressed or both are unaffected by the perturbation. Due to the symmetric nature of the absolute value in Eq. (1), the weight matrix W i is symmetric with zeros along the diagonal.
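The weight computation described above can be sketched in NumPy (a toy illustration under our reading of Eq. (1); the function and variable names are ours):

```python
import numpy as np

def weight_vector(p_response, p_regressors):
    """Average absolute difference between the differential-expression
    probabilities of the response gene and each regressor gene over the
    N_i time points of one condition.

    p_response:   shape (N_i,)      probabilities for the response gene Y
    p_regressors: shape (N_i, P)    probabilities for the P regressor genes
    """
    diffs = np.abs(p_regressors - p_response[:, None])  # (N_i, P)
    return diffs.mean(axis=0)                            # (P,)

# A regressor whose differential behavior mirrors the response gets weight 0,
# hence the smallest penalty later in the weighted fused LASSO:
p_y = np.array([0.9, 0.8, 0.1])
p_X = np.column_stack([p_y, 1 - p_y])   # second regressor behaves oppositely
w = weight_vector(p_y, p_X)
assert w[0] == 0.0 and w[1] > w[0]
```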

Regression-based model

Having k time-resolved gene expression data sets including P + 1 genes and N i , 1 ≤ i ≤ k, time points, together with the corresponding control data set(s), we aim at formulating a model which captures the following three criteria: (1) the expression levels of each gene should be explained by the expression levels of a small number of transcription factor (TF) coding genes, i.e., the corresponding regression should be sparse, to reduce the number of false positives and likely increase the detection of direct relationships; (2) the regulatory networks should be inferred simultaneously over the given k data sets, to jointly explain all analyzed conditions; and (3) a direct link should be preferred for genes which exhibit similar differential behavior over the considered conditions, since a differentially behaving gene (TF) is likely to alter the behavior of a direct target. Here, for the purpose of building the regression models, we use only the data sets from the k conditions, as they are potentially more informative about the responsive genes than the reference state. The control data sets nevertheless enter the modeling through the probabilities of differential behavior and are, thus, also employed in the reconstruction of the gene regulatory network.

The first criterion can be captured by the classical LASSO, so that the regressors with non-zero regression coefficients are considered to be involved in a direct relationship with the response gene, yielding the gene regulatory network. To address the second criterion of simultaneously inferring k networks from the k different data sets, one has to ensure that: (i) the integration and transformation of the data are performed in such a way that the LASSO penalty is imposed on all data sets simultaneously, and (ii) the k reconstructed networks are as close as possible in terms of edges (i.e., relationships), their strengths given by the coefficients, and the signs of the relationships (activating or repressing).

The integration and transformation of the data sets is achieved as follows: the k transcriptomics data sets are combined into a single block matrix X of dimensions kN × kP, with the matrices X i , 1 ≤ i ≤ k, placed along the diagonal, and Y is the collection of the profiles for the response gene from the different conditions, resulting in a vector of dimension kN × 1. As a result, the estimated coefficients form a kP × 1 vector corresponding to the edges between the response gene Y and the regressor genes in the k networks. The vector of estimated regression coefficients is denoted by β.

To ensure that the k reconstructed networks are as close as possible, we add the fusion LASSO term ||β′ − β″|| 1 , where β′ = [β 1 , β 2 , …, β k−1 ] T , β″ = [β 2 , β 3 , …, β k ] T , and the order of the k data sets is arbitrarily selected. In a regression setting, this fusion term imposes the constraint that the sum of absolute differences between the estimated coefficients of the same regressor over consecutive data sets, under the assumed arbitrary order, is minimized (Supplementary Fig. S3 and Supplementary Table S8). This idea differs from the most recent approaches whereby networks reconstructed from different data sets are obtained individually and later combined into a consensus network by existing techniques 14,51 .
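The block-matrix construction and the fusion penalty can be sketched as follows (a toy NumPy illustration with made-up dimensions):

```python
import numpy as np

# Hypothetical toy setting: k = 3 conditions, P = 2 regressor (TF) genes,
# N = 4 time points per condition.
k, P, N = 3, 2, 4
rng = np.random.default_rng(1)
X_sets = [rng.normal(size=(N, P)) for _ in range(k)]

# Block matrix X with the per-condition expression matrices on the
# diagonal, so one LASSO penalty acts on all k coefficient vectors at once.
X = np.zeros((k * N, k * P))
for i, Xi in enumerate(X_sets):
    X[i * N:(i + 1) * N, i * P:(i + 1) * P] = Xi
assert X.shape == (k * N, k * P)

# Fusion penalty: sum of absolute differences between the coefficients of
# the same regressor in consecutive data sets (the order is arbitrary).
beta = rng.normal(size=(k, P))          # beta[i] = coefficients for set i
fusion_penalty = np.abs(np.diff(beta, axis=0)).sum()
```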

The third criterion, about the similarity of differential behavior between the response and the regressors, implies the inclusion of the weight matrices W i , 1 ≤ i ≤ k, so that regressors of higher explanatory power (i.e., those associated with non-zero regression coefficients) over the multiple data sets are penalized less in the fused LASSO regression. This can be achieved by multiplying the regression coefficients by the weight matrix W (of size kP × kP), which gathers the similarity of differential behavior between each of the P regressors and the single response gene under the k conditions. As a result, the expression ||Wβ|| 1 is included as a modified penalty in the regression.

Therefore, the final model for reconstructing the gene regulatory interactions is given by the following fused LASSO formulation over the k given data sets (Fig. 1, step 4):

min β ||Y − Xβ|| 2 2 + λ 1 ||Wβ|| 1 + λ 2 ||β′ − β″|| 1 ,   (2)

where Y is the response gene, which is regulated by the regressors with non-zero regression coefficients.
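Under our reading of the final formulation, the objective can be evaluated as follows (a sketch only; the function name and the exact arrangement of the penalties are our assumptions):

```python
import numpy as np

def model_objective(X, Y, beta, W, lam1, lam2, k, P):
    """Squared residuals over the stacked data sets, a weighted L1
    penalty ||W beta||_1 favoring regressors whose differential behavior
    resembles the response, and the fusion penalty between the
    coefficient vectors of consecutive data sets."""
    residual = Y - X @ beta
    B = beta.reshape(k, P)                    # per-condition coefficients
    fusion = np.abs(B[1:] - B[:-1]).sum()     # ||beta' - beta''||_1
    return residual @ residual + lam1 * np.abs(W @ beta).sum() + lam2 * fusion

# With beta = 0 only the residual term survives:
k, P, N = 2, 3, 4
X = np.ones((k * N, k * P))
Y = np.ones(k * N)
W = np.eye(k * P)
assert model_objective(X, Y, np.zeros(k * P), W, 1.0, 1.0, k, P) == Y @ Y
```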

Gene expression data sets

Escherichia coli

The responses of Escherichia coli to stress conditions have already been well investigated, resulting in the characterization of the general and condition-specific components that regulate the transcriptional changes underlying adjustment to changing environments 52,53 . The gathered data sets provide an excellent test case against which the performance of the proposed method and the contending alternatives can be readily compared.

The time-resolved transcriptomics data sets gathered with microarray technology were obtained from 54 , where changes in the expression of genes from E. coli strain MG1655 were monitored under four stress conditions: non-lethal temperature shifts (i.e., heat and cold treatment), oxidative stress (by adding hydrogen peroxide), and lactose-diauxic shift (i.e., change of the primary carbon source), relative to cultures grown under optimal conditions, referred to as the control. All cultures were grown simultaneously under the same conditions, and the different perturbations were applied at the early mid-log phase (OD 0.6). Sampling was carried out from 10 to 50 min post-perturbation (at 10 min intervals), with two control time points before each perturbation for all considered conditions (data are available at http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE20305).

The reasons for selecting and using these data sets are that the predictions can be readily validated against the gold standard benchmark for E. coli provided by the DREAM5 challenge 14 , as well as against the experimentally verified regulatory interactions from RegulonDB 53 . In RegulonDB, information about the effect of the regulators (in terms of activating and repressing interactions) on the target genes is also provided. Moreover, these data sets satisfy the requirements of the proposed approach, since the single control can be used to establish the differential gene behavior across the data sets upon application of the stresses. In addition, the data were gathered in the same laboratory following the same protocol, thus reducing the level of noise. Finally, and most importantly, we aimed at using real-world data sets to estimate the actual performance of the proposed method in a realistic setting, rather than on simulated instances.

Mycobacterium tuberculosis

Mycobacterium tuberculosis (MTB) is a pathogenic bacterium whose gene regulatory network is poorly understood. However, Galagan et al. 55 recently made the first step towards reconstructing the complete regulatory network of MTB based on ChIP-seq and microarray gene expression data gathered under conditions of hypoxia and re-aeration. They performed time-resolved transcriptomics experiments in which the expression levels of genes were measured at 1, 2, 3, 5 and 7 days after culturing MTB strain H37Rv in bacteriostatic hypoxia conditions. The samples were then placed back in the aerobic rolling culture and the gene expression levels were measured after 1, 2, 4, 5 and 7 days of re-aeration. The expression levels were also measured at time point 0, which is considered the control data set for the differential analysis (data are available at http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE43466).

The reasons for selecting and using these data sets are that: (i) the predictions can be validated with respect to recent real-world data sets from a non-model species; (ii) they satisfy the requirements of our approach for inferring the gene regulatory network from multiple data sets, as the gathered time series can be considered as two separate time-resolved data sets (conditions), hypoxia and re-aeration, while including time point 0 as a control; and (iii) the predictions can be readily compared to the small sub-network of MTB which was partially verified through experiments in 55 (Fig. 2 in 55 ).

Mus musculus

Mus musculus represents a model organism for understanding human biology and disease; therefore, studying its large-scale gene regulatory network is an important and challenging task. Many experiments have been performed on mouse embryonic stem (ES) cells to explore the details of their pluripotency and ability to self-propagate and renew. To this end, Sene et al. 56 measured the expression levels of genes from three genetically distinct mouse ES cell lines (R1, J1 and V6.5) during differentiation at 11 time points: 0 h, 6 h, 12 h, 18 h, 24 h, 36 h, 48 h, 4 d, 7 d, 9 d and 14 d. The expression levels at time point 0 are considered the control data set for the differential analysis (the processed data were downloaded from http://www.maayanlab.net/ESCAPE/browse.php).

This data set provides a good test case for comparing the performance of the approaches for gene regulatory network extraction in a higher organism. Moreover, the literature-based embryonic stem cell network is used as a global standard network to validate the predictions obtained from the compared approaches (downloaded from iScMiD (Integrated Stem Cell Molecular Interactions Database), http://amp.pharm.mssm.edu/iscmid/literature/index.htm).

Comparative analysis

For the comparative analysis we considered the most recent as well as the state-of-the-art approaches that address two criteria: sparsity of the network and removal of indirect relationships. To this end, we used the following approaches: global silencing 33 , network deconvolution 34 , Gaussian Graphical Models (GGM) 19 , mutual information (ARACNE 25 and CLR 26 ), a Bayesian network approach (catnet) 57 , the GENIE3 approach 21 and different regularization-based models 45 (more details in the Methods section).

Escherichia coli

The resulting networks from these approaches were compared to the network of experimentally verified regulatory relationships from RegulonDB and the gold standards from the DREAM5 challenge. The findings were summarized in terms of the resulting true positive and false positive rates, yielding the ROC (receiver operating characteristic) curves illustrated in Fig. 2 (based on the threshold value used for retaining a weighted edge in the network). We used the R package minet 58 to plot the ROC curves, and the statistics obtained with this package are summarized in Table 1. As illustrated in Fig. 2, the proposed approach allowed the prediction of regulatory relationships at a higher percentage of true positives in comparison to GENIE3, network deconvolution and global silencing, while GGM, ARACNE, CLR and the regularization-based models performed poorly for accurate inference of the regulatory network. Moreover, it is evident that, on the used data sets, the proposed method based on the extension of the fused LASSO performed similarly to the contending approaches when only the connectivity (i.e., presence of an edge/relationship) is considered. To further quantify the performance of the compared methods, we calculated the areas under the ROC curves (AUROCs), the areas under the PR (precision-recall) curves (AUPRs) and the true positive rates (TPR) at a low false positive rate (most of the compared methods showed the maximum difference between TPR and FPR at FPR = 0.03), presented in Table 1. We used the R package pROC 59 to estimate the AUROCs and their respective confidence intervals (CI) (Supplementary Table S1). The other statistics resulting from the comparison of the considered approaches are summarized in Supplementary Table S1. Altogether, our findings demonstrate that the performance of the proposed approach is statistically similar to that of GENIE3 and network deconvolution, while outperforming the remaining contending methods.
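The original analysis uses the R packages minet and pROC; for readers working in Python, the same summary statistics can be computed with scikit-learn (the gold standard labels and edge weights below are made up for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical gold standard (1 = edge present in the reference network)
# and predicted edge weights for the same candidate gene pairs.
gold    = np.array([1, 0, 1, 1, 0, 0, 0, 1])
weights = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.1, 0.5, 0.7])

auroc = roc_auc_score(gold, weights)            # area under the ROC curve
aupr  = average_precision_score(gold, weights)  # area under the PR curve
assert 0.0 <= auroc <= 1.0 and 0.0 <= aupr <= 1.0
```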

ROC curves for the methods considered in the comparative analysis (E. coli data sets).

ROC curves are shown for the gene regulatory networks predicted from the data sets for (A) cold, (B) heat and (C) oxidative stress, as well as (D) lactose-diauxic shift, in addition to (E) the combination of all four data sets, using the following methods: GGM, ARACNE, CLR, L1/2, GENIE3, global silencing, network deconvolution and the proposed approach based on the fused LASSO, which simultaneously considers all four data sets. Due to the high similarity in the performance of the regularization-based models, the ROC curve of the L1/2 model is shown as representative to avoid overlapping curves. The color of the dashed lines indicates the method. TPR and FPR stand for true positive rate and false positive rate, respectively.

In our approach, the network over all data sets was obtained by taking the maximum or the average of the corresponding regression coefficients (see Supplementary Fig. S1 (a) for illustration). Since the distribution of the differences between the regression coefficients obtained based on each of the four data sets for the same response and regressors are almost identical (see Supplementary Fig. S1 (b)), the two networks are not expected to differ. Indeed, the comparison between the two networks obtained by taking the average and the maximum of the regression coefficients does not show a significant difference with respect to the AUROC values (p-value = 0.6257, see Supplementary Table S1).

The performance of each method over the four data sets and their combination was summarized by the geometric mean of the five corresponding AUROCs and AUPRs, while the OverallScore is the average of the calculated AUROCscore and AUPRscore 14 :

AUROCscore = (AUROC 1 × ⋯ × AUROC 5 ) 1/5 , AUPRscore = (AUPR 1 × ⋯ × AUPR 5 ) 1/5 , OverallScore = (AUROCscore + AUPRscore) / 2.   (3)

The OverallScore in Eq. (3) can be used to rank the compared methods, such that a larger value corresponds to a better-performing method. As shown in Table 1, the performance of our method was superior to that of the other contending methods based on the OverallScore. Moreover, the standard deviation of the AUROCs for the proposed method is smaller than that of the contenders (see Supplementary Table S1). The stability of the obtained networks further supports the usage of the fusion term in the proposed approach, which was included for the very purpose of producing similar networks across different data sets.
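A minimal sketch of the OverallScore computation, under our reading of Eq. (3) (function name is ours):

```python
import numpy as np

def overall_score(aurocs, auprs):
    """Geometric mean of the AUROCs and of the AUPRs over the data sets,
    then the average of the two resulting scores."""
    auroc_score = np.prod(aurocs) ** (1.0 / len(aurocs))
    aupr_score = np.prod(auprs) ** (1.0 / len(auprs))
    return 0.5 * (auroc_score + aupr_score)

# Identical per-data-set values collapse to their common value:
assert abs(overall_score([0.8] * 5, [0.4] * 5) - 0.6) < 1e-9
```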

We next determined the smallest normalized weight at which the first true positive edge is detected, termed the selector value; selector values closer to 1 indicate that the true positive edges are likely assigned higher weights. As shown in Table 2, the proposed method performed best for the networks obtained over all data sets. Moreover, our approach is among the two best-performing methods when considering the individual data sets. The discrepancy can be explained by the inclusion of the fusion term, which insists on context-independent regulatory relationships. Since the edge weights of the networks obtained by ARACNE with each of the four data sets were identical, the corresponding selector values are not informative and cannot be used in the comparison. Interestingly, network deconvolution and global silencing did not perform well with respect to the selector value, despite recent claims based on synthetic data. The reasonably high selector values obtained from the regularization-based models further confirm the power of regularization-based approaches at assigning higher scores to the true positive edges.
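The selector value can be sketched as follows (our reading of the definition above; the edge weights and function name are made up for illustration):

```python
import numpy as np

def selector_value(weights, is_true_edge):
    """Normalized weight of the highest-ranked true positive edge when
    edges are scanned from the heaviest downwards; values close to 1
    mean the true edges received the highest weights."""
    w = np.asarray(weights, dtype=float) / np.max(weights)  # normalize to [0, 1]
    order = np.argsort(-w)                                   # heaviest first
    for idx in order:
        if is_true_edge[idx]:
            return w[idx]
    return 0.0

# The heaviest edge is a true positive, so the selector value is 1:
assert selector_value([0.9, 0.5, 0.2], [True, False, False]) == 1.0
```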

Furthermore, the accuracy of inferred gene regulatory networks depends on the ability to predict the type of the regulatory relationships—activating or repressing. Therefore, we compared the performance of the methods regarding the type of regulatory relationships predicted. To this end, we used the network from RegulonDB which includes in total 4566 relationships. Table 3 summarizes the percentages of the predicted true activating and repressing regulatory interactions for the four data sets and the considered methods. In addition, it includes the percentages of the true positive and true negative relationships predicted by different methods, irrespective of the type of regulation.

ARACNE and CLR as well as GENIE3 are not able to infer the type of regulatory relationships. As mentioned in Section 3.2, the original algorithms for the global silencing and the network deconvolution do not provide the sign of interactions, since they aim at ranking the regulatory interactions. As shown in Table 3, our findings indicate that the performance of the proposed method with respect to the prediction of the type of regulatory relationships is promising. In addition, GGM as well as regularization-based models resulted in the largest fraction of true negative edges as well as the smallest fraction of true positive edges. The deconvolution method, however, outperformed the rest of the approaches with respect to the fraction of true positive edges for cold and oxidative stress as well as lactose-diauxic shift (albeit with much smaller selector values, see Table 2).

In addition, to specifically inspect the quality of the inferred networks, we selected a subnetwork from RegulonDB including four sigma factors, RpoD, RpoE, RpoH and RpoS, covered by the analyzed data sets. Sigma factors are proteins required for the initiation of RNA synthesis 60 , and their activities depend on the environmental conditions. While RpoD is the primary sigma factor, which transcribes most of the genes, the activities of the other three are environment-specific; for instance, RpoE is the extreme heat stress sigma factor, RpoH is the heat stress sigma factor activated upon heat exposure, and RpoS is the stationary-phase sigma factor. Interestingly, for the four sigma factors, no edges were predicted by GGM. With respect to the six other approaches as well as catnet (applied only to the genes included in the illustrated networks), inspection of Fig. 3 (for the sake of readability, the resulting networks for CLR, GENIE3 and the regularization-based models are shown in Supplementary Fig. S2) makes it evident that the proposed approach predicted a considerably smaller number of false positives, due to the sparsity constraints, and a comparable number of true positives, due to the regression-based formulation. In addition, as a result of the imposed constraints in the fused LASSO formulation, the networks extracted from the four different data sets by our method are exactly the same (except for slight differences in the weights; see Supplementary Fig. S1), which is biologically expected. However, this was not the case for the networks reconstructed by ARACNE, CLR, GENIE3, catnet, network deconvolution and global silencing; in addition, the networks from GENIE3, catnet and the deconvolution method were the densest over all data sets. Altogether, the regularization-based models largely underestimated the true positive edges, although they were consistent in predicting the edges for each single data set. The proposed approach performed particularly well with respect to predicting the type, and not only the presence, of a regulatory relationship, as evident in the cases of the RpoH and RpoD sigma factors.

Sub-networks including sigma factors.

The gene regulatory network for the four sigma factors, RpoD, RpoE, RpoH and RpoS, together with their experimentally verified interactions obtained from RegulonDB 53 . The colored edges belong to the sub-network retrieved from RegulonDB, where red edges denote activating and blue edges repressing regulatory relationships. Edges marked in green are of unspecified regulatory type. If an edge was predicted by a method but is not included in the network from RegulonDB, it is colored black. The edges predicted by ARACNE, catnet, network deconvolution and global silencing are marked by ‘A’, ‘C’, ‘D’ and ‘S’, respectively, next to the corresponding edges. The letters are color-coded: red, blue or green fonts represent activating, repressing or unspecified relationships, respectively. The dotted edges denote the relationships predicted by the proposed approach. Illustrated are the predicted regulatory relationships and their types based on data from the (A) cold, (B) heat, (C) oxidative stress and (D) lactose-diauxic shift time-series experiments.

Finally, we recorded the run time of each inference method, presented in Supplementary Table S2. Since the proposed method comprises many independent regression models, we parallelized the approach (as described in Methods); therefore, the run time of the proposed method is reported for a single regression. Clearly, the algorithms which rely on matrix inversions in the presence of hidden parameters (e.g., network deconvolution) are faster than the proposed method, which requires solving multiple regressions. While cross-validation is expected to increase the computational demand, if a single value is used across all genes (i.e., models), the regulatory interactions in the vicinity of a gene can be inferred in a drastically reduced time, on the order of a few seconds.

Mycobacterium tuberculosis

Motivated by the predictions from applying the proposed approach to the combination of all E. coli data sets, we next carried out the comparative analysis only on the combination of both data sets from MTB (i.e., hypoxia and re-aeration). To obtain gene regulatory networks, we applied all compared methods to the pre-processed gene expression values as well as to the log 2-transformed gene expression fold changes between the control (time point 0) and the hypoxia and re-aeration time-series samples. The ROC analysis for the compared methods was then performed using the R package minet 58 for both the top 31 (as the gold standard network includes 31 edges) and the top 100 highly ranked edges, and the corresponding statistics are summarized in Supplementary Table S3. It is evident that the proposed approach allowed the prediction of regulatory relationships at a higher percentage of true positives, for both the gene expression values and the log 2-transformed fold changes, compared to GENIE3, network deconvolution and global silencing. As with E. coli, GGM, ARACNE, CLR and the regularization-based models performed poorly for accurate inference of the regulatory network; however, the performance of the majority of the compared methods increased when applied to the log 2-transformed gene expression fold changes. In addition, the selector values for all compared methods were the same and equal to one.

Furthermore, to specifically inspect the quality of the inferred networks, we investigated three experimentally verified TF regulatory interactions highlighted in the Results section of Galagan et al. (page 180, 55 ): TF Rv0081 negatively regulates TFs Rv3597c and Rv3416 (whiB3) (Rv0081 → Rv3597c (Lsr2), Rv0081 → Rv3416 (whiB3)), and TFs Rv3133c and Rv2034 (DosR) negatively regulate each other (Rv3133c ↔ Rv2034 (DosR)). All interactions and their regulatory types were successfully predicted by the proposed approach with sufficiently high ranks (Supplementary Table S4). The global silencing and network deconvolution methods were not successful with respect to the type of regulatory interactions and obtained rather low ranks, while GENIE3 predicted all interactions with high ranks but is not able to infer the type of regulatory relationships. Likewise, GGM, ARACNE and CLR were not successful in predicting the interactions, while the regularization-based models performed very inconsistently with respect to the sign and the prediction of true edges.

Moreover, we counted the number of predicted true interactions among the top 100 highly ranked edges (Supplementary Tables S4, S5 and S6), considering the gold standard sub-network that includes 31 interactions (Fig. 2 in 55 ). It is evident that the proposed approach obtained the highest overlap with the gold standard, while the minimum rank of the intersected interactions is remarkably high (above 0.6). Finally, the networks predicted by the compared approaches were filtered by the corresponding median edge ranks and the ROC analysis was repeated on the resulting sub-networks. Here, too, our proposed approach yielded the highest AUROC and AUPR values, while the edges in the filtered sub-network have considerably higher ranks (above 0.6075 and 0.379 when applied to gene expression levels and log2-transformed gene expression fold changes, respectively) than the median ranks of the other contending methods.
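The top-k ROC evaluations above reduce to scoring a ranked edge list against a gold-standard network. The following is a minimal sketch of that computation, using the standard rank-sum (Mann–Whitney) equivalence for AUROC; the edge names and scores are made up for illustration, and the actual analyses used the R package minet:

```python
# Hedged sketch: AUROC for a ranked edge list against a gold-standard network.
# AUROC = (R_pos - n_pos*(n_pos+1)/2) / (n_pos * n_neg), where R_pos is the sum
# of ranks of true edges when edges are sorted by increasing confidence.
def auroc_from_ranking(scored_edges, gold_edges):
    """scored_edges: list of (edge, score); gold_edges: set of true edges."""
    # Sort ascending by score so ranks 1..N correspond to increasing confidence.
    ordered = sorted(scored_edges, key=lambda e: e[1])
    n_pos = sum(1 for e, _ in ordered if e in gold_edges)
    n_neg = len(ordered) - n_pos
    if n_pos == 0 or n_neg == 0:
        raise ValueError("need both positive and negative edges")
    rank_sum = sum(r for r, (e, _) in enumerate(ordered, start=1)
                   if e in gold_edges)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy example: 4 candidate edges, 2 of which are in the gold standard.
scores = [(("tfA", "g1"), 0.9), (("tfA", "g2"), 0.8),
          (("tfB", "g1"), 0.3), (("tfB", "g2"), 0.1)]
gold = {("tfA", "g1"), ("tfA", "g2")}
print(auroc_from_ranking(scores, gold))  # perfect ranking -> 1.0
```

Ties are ignored in this sketch; a full implementation would average over tied ranks.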

Mus musculus

Finally, to verify the performance of the proposed approach in a higher organism, we performed a comparative analysis on the combination of tissue-specific time-series data sets from three genetically distinct mouse ES cell lines during differentiation. Motivated by the improvement observed when applying the inference approaches to the MTB data sets, we applied all compared methods to the log2-transformed gene expression fold changes between control (time point 0) and the remaining time points of the corresponding ES cell lines. The ROC analysis for the compared methods was then carried out on the top 248 (as the gold standard network includes 248 edges) and top 500 highly ranked edges; the corresponding statistics are summarized in Supplementary Table S7.
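The preprocessing step described here (log2 fold changes of each gene relative to the time-point-0 control) can be sketched as follows; the gene names and expression values below are synthetic, and the small epsilon guard is an assumption, not part of the original pipeline:

```python
# Minimal sketch of log2-transformed fold changes versus the t = 0 control.
import math

def log2_fold_changes(series, eps=1e-9):
    """series: expression values over time; series[0] is the control (t = 0)."""
    baseline = series[0]
    # eps guards against division by zero for unexpressed genes (assumption).
    return [math.log2((x + eps) / (baseline + eps)) for x in series[1:]]

# Synthetic expression trajectories for two (arbitrarily named) genes.
expr = {"Rv0081": [10.0, 20.0, 40.0], "Rv3133c": [8.0, 8.0, 2.0]}
fc = {g: log2_fold_changes(v) for g, v in expr.items()}
print(fc["Rv0081"])   # roughly [1.0, 2.0]: doubling, then quadrupling
```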

It is evident that the proposed approach predicted regulatory relationships at a higher percentage of true positives than GENIE3, network deconvolution and global silencing. As with the other data sets, CLR as well as the regularization-based models performed poorly for accurate inference of the regulatory network; however, the performance of GGM increased in comparison to its performance on the other data sets, while ARACNE failed to predict any true positive edge. In addition, the selector values for most of the compared methods were high and close (or equal) to one.

Moreover, we counted the number of predicted true interactions among the top 248 highly ranked edges (Supplementary Tables S7 and S9), considering the gold standard sub-network (which includes 248 interactions). It is evident that the proposed approach obtained the highest overlap with the gold standard, while the minimum rank of the intersected interactions is remarkably high (above 0.5). Finally, the networks predicted by the compared approaches were filtered by the corresponding median edge ranks and the ROC analysis was repeated on the resulting sub-networks. Once again the proposed approach resulted in the highest AUROC and AUPR values, while the edges in the filtered sub-network have ranks similar to those of the sub-networks predicted by the other contending methods.


Discussion

TRNs provide an important and popular framework for better understanding the regulatory mechanisms through which a cell gives rise to phenotypic conditions. However, to the best of our knowledge, TRN reconstruction methods today do not incorporate phenotypic information adequately, or at all. As such, the reconstructed networks may be limited in pinpointing the regulatory mechanisms most related to a phenotype under investigation, and often necessitate a follow-up step that filters for phenotype relevance. For example, a recent study of gene expression changes underlying Huntington’s disease (HD) 73 reconstructed a TRN specific to the mouse striatum and then short-listed TFs whose predicted targets were enriched in genes differentially expressed in HD mouse models. In another study, gene expression profiles of TFs and putative target genes were used to reconstruct a context-restricted TRN for breast cancer (using only breast cancer samples), which was then used to short-list breast cancer-relevant TFs (called “risk-TFs”) whose regulons were enriched in risk loci 57 . In the aforementioned study 57 , GWAS and eQTL analyses were used to define risk loci and relate them to the regulon of each TF. Such previous attempts to augment TRN reconstruction with phenotypic data motivated us to develop a systematic approach that incorporates information about the phenotype directly into TRN reconstruction.

In this study, we developed InPheRNo to reconstruct phenotype-relevant TRNs and utilized it to identify regulatory interactions that differentiate one cancer type from others while correcting for the confounding effect of tissues of origin. InPheRNo is based on a carefully designed PGM, which is key to combining TF–gene expression correlations with gene–phenotype associations. The conditional distributions of the PGM model the summary statistics of gene–phenotype and TF–gene associations, providing a succinct and efficient approach for data integration to identify phenotype-relevant regulatory relationships. The method is broadly applicable since it learns regulatory relationships from expression data alone and does not impose any restriction on the type of phenotype under investigation—the phenotype may be binary, categorical or even continuous-valued, and any appropriate statistical method for testing its association with a gene’s expression may be used in InPheRNo. Unlike several other methods that rely on the regulatory relationship of one TF–gene pair at a time, InPheRNo considers the effect of multiple TFs on each gene in the reconstruction procedure, at the time of selecting candidate TFs as well as in training the PGM. Finally, using posterior probabilities obtained from the PGM, InPheRNo provides a score representing the confidence for the identified phenotype-relevant regulatory edges.

In designing InPheRNo’s pipeline, we chose to first perform a feature selection step (using Elastic Net) and only use the selected TFs in the PGM. First and foremost, this was done to reduce the computational complexity, both by reducing the number of candidate TFs and by summarizing the expression profiles of genes and TFs using summary statistics. Several previous studies have successfully used summary statistics (and particularly p values) for similar reasons 23,28,29,30,31,32 . Second, modeling summary statistics instead of the full gene expression data enables the integration of other regulatory evidence (captured through data types other than transcriptomic, if available) into the PGM with relative ease.
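The summary-statistic idea can be sketched as follows. InPheRNo itself uses Elastic Net coefficients and a PGM; the Pearson correlation with a permutation p value below is only an illustrative stand-in for "reduce each TF–gene pair to one association statistic", and all data are synthetic:

```python
# Hedged illustration: reducing a TF-gene expression pair to a single
# association p value (a stand-in for InPheRNo's actual Elastic Net step).
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def permutation_pvalue(x, y, n_perm=999, seed=0):
    """Two-sided permutation p value for the correlation of x and y."""
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    y_shuffled = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y_shuffled)
        if abs(pearson(x, y_shuffled)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction

tf_expr   = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
gene_expr = [1.1, 2.2, 2.9, 4.3, 5.1, 5.8, 7.2, 8.1]  # strongly TF-driven
print(permutation_pvalue(tf_expr, gene_expr))  # small p value (well below 0.05)
```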

One important consideration when using InPheRNo is the number of samples. As InPheRNo is based on modeling summary statistics obtained from gene–phenotype and gene–TF associations, similar requirements on the minimum number of samples for those analyses should also be considered here 74,75,76 . However, two features of InPheRNo enable it to handle a small number of samples better than traditional co-expression analysis. First, it utilizes Elastic Net (as part of the pipeline), whose regularization terms can overcome some limitations of the small sample size by imposing a sparsity criterion. Second, since its PGM models the distribution of the p values instead of relying on whether the p values are significant (i.e., instead of thresholding them), it is more robust to small sample sizes.
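The advantage of modeling the p-value distribution rather than thresholding can be illustrated with a toy two-component mixture. Under the null, p values are Uniform(0,1); under the alternative they are skewed toward 0, here modeled as Beta(a, 1) with a < 1, a common choice in the summary-statistic literature. The specific densities and prior below are assumptions for illustration, not InPheRNo's actual conditional distributions:

```python
# Toy sketch: soft posterior weights for p values instead of a hard threshold.
def beta_a1_pdf(p, a):
    """Density of Beta(a, 1): a * p**(a-1) on (0, 1)."""
    return a * p ** (a - 1)

def posterior_alt(p, a=0.3, prior_alt=0.5):
    """Posterior probability that a p value came from the alternative."""
    like_alt = beta_a1_pdf(p, a) * prior_alt
    like_null = 1.0 * (1 - prior_alt)   # Uniform(0,1) density is 1
    return like_alt / (like_alt + like_null)

# A p value of 0.04 and one of 0.06 receive similar, graded weights,
# unlike the all-or-nothing outcome of a hard 0.05 cutoff:
print(round(posterior_alt(0.04), 3), round(posterior_alt(0.06), 3))
```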

As there are no rigorously validated metazoan TRNs to benchmark against, we evaluated the predicted TRNs indirectly through key TFs and gene expression signatures derived from them, and showed clear improvement over several related strategies. Our results showed that TFs with many cancer type-relevant targets are potential cancer driver TFs and may suggest novel drug targets or provide new insights regarding the development and progression of cancer. Our results also suggest a powerful approach for subtyping cancer patients using gene expression signatures: while most approaches developed for this task do not take into account the regulatory interactions among genes, our survival analysis suggests that cancer type-relevant TRNs can improve the predictive power of gene expression signatures.

In spite of the success of the InPheRNo-based gene signatures in differentiating between patients with poor and good prognosis for the majority of cancer types, in some cases, e.g., BRCA, this method did not result in groups with significantly different survival probability, despite the existence of BRCA-driver TFs in the signature. This lack of success may partially be owing to the fact that we clustered samples of each cancer type into two clusters, whereas these cancer types may include more than two subtypes, as is the case in BRCA 26 . However, since in most cancer types a definite number for the cancer subtypes is not yet established, we preferred to keep the number of clusters equal to two. A more in-depth analysis of subtype discovery and survival analysis using InPheRNo-derived TRNs is left for future work.

We would like to emphasize that in this study, we focused only on transcriptomic data, owing to the availability of this data type in many domains, including domains outside of cancer research, and the lack of other important data types such as ChIP-seq data in these domains. Even in the area of cancer research, in which large databases of ChIP-seq tracks (such as ENCODE) corresponding to various cancer cell lines are available, the datasets are extremely biased toward a small fraction of well-studied TFs (for example, only about 10% of all TFs are studied in ENCODE). As a result, including these data sets may significantly bias the analysis toward this small fraction of TFs. In addition, matched gene expression and ChIP-seq data for tumor samples are rarely available, and combining these data types from different sources and different samples, in itself a significant challenge, will require substantial effort in the future.

We believe that including additional types of regulatory evidence (especially those representing “cis” mechanisms such as TF motifs and chromatin state changes) in the phenotype-relevant TRN reconstruction procedure is an important and essential future direction for improving InPheRNo. This is especially true considering that many efforts are under way to generate large datasets containing matched transcriptomic, genomic, epigenomic and phenotypic profiles of patients 77,78,79 . One way to achieve this goal might be to include different regulatory evidence as new observed variables in the PGM used in InPheRNo. An alternative is to use cis-regulatory evidence to construct an initial network that is used as a “prior” for Bayesian analysis of expression data, as has been demonstrated before 80 . Future investigations should focus on these avenues of integrating multi-omics data into the InPheRNo model.




Mapping gene regulatory networks from single-cell omics data

Single-cell techniques are advancing rapidly and are yielding unprecedented insight into cellular heterogeneity. Mapping the gene regulatory networks (GRNs) underlying cell states provides attractive opportunities to mechanistically understand this heterogeneity. In this review, we discuss recently emerging methods to map GRNs from single-cell transcriptomics data, tackling the challenge of increased noise levels and data sparsity compared with bulk data, alongside increasing data volumes. Next, we discuss how new techniques for single-cell epigenomics, such as single-cell ATAC-seq and single-cell DNA methylation profiling, can be used to decipher gene regulatory programmes. We finally look forward to the application of single-cell multi-omics and perturbation techniques that will likely play important roles for GRN inference in the future.

Figure: Single-cell GRNs. The goal of many single-cell studies is to understand which cell…


6. How Many Gene Regulatory Networks Exist?

It is generally acknowledged that a phenotype is an emergent property of genotype–environment interactions. Specifically, a phenotype results from the molecular and cellular activity patterns arising from genotype–environment interactions. This implies that each observable phenotype is associated with phenotype-specific gene networks, because without changing molecular interactions a phenotype cannot change; this concept is illustrated in Figure 1. In this figure, gene networks can be seen as a bottleneck between the genotype and the phenotype with respect to their coupling. That means every change on the genotype level that results in a change of the phenotype will also inevitably lead to a change in the gene network structure, as the mediator between both levels.

Figure 1. Schematic overview of the general role gene networks play in understanding phenotypes.

However, since gene networks refer to all possible types of molecular networks, including the transcriptional regulatory network, protein interaction network, metabolic network, gene regulatory network and interactions between these networks, it is less clear which of these networks, or whether all of them, are actually changed. Moreover, because a gene regulatory network can potentially represent many types of physical biochemical interactions among genes and gene products (de Matos Simoes et al., 2013a), it can be expected that gene regulatory networks are highly phenotype specific (Schadt, 2009; Emmert-Streib and Glazko, 2011). Establishing such relationships will therefore be a complex task, but it also provides an opportunity to catalog phenotypes quantitatively. An example of the analysis of tissue-specific networks can be found in Guan et al. (2012), where 107 tissue-specific networks were studied. Currently, the number of GRNs is difficult to estimate, but based on these preliminary results one can hypothesize that there are more than 200 different GRNs for human alone, because this roughly corresponds to the number of different cell types. However, pathological cells manifesting tumors also have their own characteristic networks (Emmert-Streib et al., 2014), implying that there are probably thousands of different gene networks in human.


RMaNI: Regulatory Module Network Inference framework

Background: Cell survival and development are orchestrated by complex interlocking programs of gene activation and repression. Understanding how this gene regulatory network (GRN) functions in normal states, and is altered in cancer subtypes, offers fundamental insight into oncogenesis and disease progression, and holds great promise for guiding clinical decisions. Inferring a GRN from empirical microarray gene expression data is a challenging task in cancer systems biology. In recent years, module-based approaches for GRN inference have been proposed to address this challenge. Despite the demonstrated success of module-based approaches in uncovering biologically meaningful regulatory interactions, their application remains limited to a single condition, without supporting the comparison of multiple disease subtypes/conditions. Also, their use remains unnecessarily restricted to computational biologists, as accurate inference of modules and their regulators requires the integration of diverse tools and heterogeneous data sources, which in turn requires scripting skills, data infrastructure and powerful computational facilities. New analytical frameworks are required to make the module-based GRN inference approach more generally useful to the research community.

Results: We present the RMaNI (Regulatory Module Network Inference) framework, which supports cancer subtype-specific or condition-specific GRN inference and differential network analysis. It combines transcriptomic as well as genomic data sources, integrates heterogeneous knowledge resources and a set of complementary bioinformatic methods for automated inference of modules and their condition-specific regulators, and facilitates downstream network analyses and data visualization. To demonstrate its utility, we applied RMaNI to a hepatocellular microarray data set containing normal and three disease conditions. We demonstrate how RMaNI can be employed to understand the genetic architecture underlying the three disease conditions. RMaNI is freely available at http://inspect.braembl.org.au/bi/inspect/rmani

Conclusion: RMaNI makes available a workflow with a comprehensive set of tools that would otherwise be challenging for non-expert users to install and apply. The framework presented in this paper is flexible and can easily be extended to analyse any dataset with multiple disease conditions.


4. Results and discussion

4.1. Simulated BNps with 7 genes

We first evaluate the different inference algorithms on synthetically generated random networks. We generate 1000 random BNps with n = 7 genes, maximum input degree K = 3, and perturbation probability p = 0.01. For each node, we uniformly assign 1 to K regulators; hence the average connectivity in this set of random networks is 2. After determining the regulatory relationships among nodes, the regulatory functions for each node are determined by randomly filling in the corresponding truth tables with Bernoulli random numbers whose bias follows a Beta distribution with mean 0.5 and standard deviation 0.01. For each random BNp, we simulate time series with different numbers of state transitions based on its underlying Markov chain. The number of "observed" state transitions M ranges from 10 to 60 to reflect the difficulty level of network inference. For control, we choose the first node as the marker gene and define the undesirable states as those network states with the first node down-regulated; in the binary representation of network states, U = {x | x1 = 0}. As the networks are randomly generated, without loss of generality, we allow intervention on the last node as the control gene, which we can either knock up or down to derive control policies. In our simulated random BNps, the original average undesirable steady-state mass is π_org = 0.5071 with standard deviation 0.3575; π_org ≈ 0.5 because we set the bias to 0.5. When we apply the MSSA algorithm to derive the optimal stationary control policies for these random BNps, the average controlled undesirable steady-state mass is π = 0.3703 with standard deviation 0.3749.
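The simulation protocol above can be sketched compactly. The code below generates a random Boolean network with perturbation (BNp) and simulates observed state transitions; the parameter names mirror the text, but the implementation itself is an assumption (for simplicity the truth-table bias is fixed at 0.5 rather than Beta-distributed):

```python
# Minimal BNp sketch: n genes, max in-degree K, perturbation probability p.
import random

def random_bnp(n=7, K=3, seed=1):
    rng = random.Random(seed)
    # Each node gets 1..K uniformly chosen regulators.
    regulators = [rng.sample(range(n), rng.randint(1, K)) for _ in range(n)]
    # One truth-table entry per input combination of each node's regulators.
    tables = [[rng.randint(0, 1) for _ in range(2 ** len(regs))]
              for regs in regulators]
    return regulators, tables

def step(state, regulators, tables, p=0.01, rng=random):
    # With probability p per gene, flip that gene (random perturbation);
    # otherwise apply the deterministic truth-table update.
    flips = [1 if rng.random() < p else 0 for _ in state]
    if any(flips):
        return tuple(s ^ f for s, f in zip(state, flips))
    nxt = []
    for regs, table in zip(regulators, tables):
        idx = 0
        for r in regs:                   # index into the truth table
            idx = (idx << 1) | state[r]
        nxt.append(table[idx])
    return tuple(nxt)

regs, tabs = random_bnp()
state = (0,) * 7
trajectory = [state]
rng = random.Random(2)
for _ in range(10):                      # M = 10 "observed" transitions
    state = step(state, regs, tabs, p=0.01, rng=rng)
    trajectory.append(state)
print(len(trajectory), all(len(s) == 7 for s in trajectory))
```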

Based on these simulated time series, we implemented the REVEAL, BIC, MDL, uMDL, and Best-Fit inference algorithms, modified accordingly to reconstruct BNps, including regulatory relationships and regulatory functions represented as general truth tables. For BIC and MDL, we set the regularization coefficients to values previously reported to perform well in Zhao et al. (2006): λ = 0.5 for BIC and λ = 0.3 for MDL.

Table 1 provides the network inferential validity measurements: the normalized Hamming distance μham (Hamming distance divided by the total number of edges in the true networks), the steady-state distance μss, and the controllability distance μctrl for the different network inference algorithms given different numbers of state transitions. As discussed in Zhao et al. (2006), BIC and MDL perform similarly. Regarding the accurate recovery of regulatory relationships, it is interesting to see that Best-Fit appears to achieve the best performance with respect to μham, while REVEAL does not perform very well. One explanation could be that REVEAL introduces many false positives, presumably to best fit the data by using functions with more regulators. This is in fact what we observe in our experiments. All the other inference algorithms choose the functions with the smallest number of regulators, either by complexity regularization in BIC, MDL, and uMDL or by choosing the "parsimonious" functions with the minimum prediction errors in Best-Fit. For uMDL, we note that μham improves quickly with increasing sample size compared to the other complexity-regularization algorithms, BIC and MDL. In our experiments, uMDL consistently generates very few false positive edges (close to zero), even with a very limited number of samples, which is the main advantage of the uMDL algorithm. This has also been shown in the original paper (Dougherty et al., 2008). For μss, both REVEAL and Best-Fit perform consistently better than BIC, MDL, and uMDL, since both aim to find the network models that best fit the observed state transitions. With regularization of model complexity by BIC, MDL, and uMDL, the steady-state distances are greater. As mentioned earlier, REVEAL and Best-Fit, especially REVEAL, reconstruct networks with more edges to explain the observed data, which leads to smaller μss.
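The first of these indices, the normalized Hamming distance μham, can be computed directly from edge sets. The toy networks below are hypothetical and serve only to illustrate the normalization by the number of true edges:

```python
# Sketch of the normalized Hamming distance mu_ham used in Table 1:
# edge mismatches between inferred and true graphs, divided by the
# number of edges in the true network.
def normalized_hamming(true_edges, inferred_edges):
    mismatches = len(true_edges ^ inferred_edges)   # symmetric difference
    return mismatches / len(true_edges)

true_net = {(0, 1), (1, 2), (2, 0), (3, 4)}
inferred = {(0, 1), (1, 2), (4, 3)}   # one edge reversed, one edge missing
print(normalized_hamming(true_net, inferred))  # 3 mismatches / 4 true edges
```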

Table 1

The comparison of network inference algorithms (REVEAL, BIC, MDL, uMDL, and Best-Fit) for different numbers M of observed state transitions.

Validity    μham                        μss                         μctrl
M           10      30      50          10      30      50          10      30      50
REVEAL      0.7774  0.6111  0.6511      0.6743  0.4657  0.4216      0.1067  0.0275  0.0049
BIC         0.6966  0.4196  0.3304      0.8679  0.7089  0.5492      0.0739  0.0300  0.0126
MDL         0.7204  0.4260  0.3294      0.9414  0.7225  0.5435      0.0775  0.0311  0.0121
uMDL        0.8000  0.3728  0.2471      1.1957  0.6973  0.4935      0.1058  0.0352  0.0093
Best-Fit    0.7311  0.3919  0.2913      0.6378  0.4244  0.4098      0.1027  0.0250  0.0045

When we investigate the inferential validity with respect to controllability, μctrl, we see interesting changes in the relative ordering of the five algorithms. Especially with very few state transitions, M = 10, the BIC, MDL, and uMDL algorithms perform better than REVEAL and Best-Fit, which indicates that regularization of model complexity with a limited number of observations helps reconstruct network models that yield better controllers. With more observations, REVEAL and Best-Fit gradually overtake BIC, MDL, and uMDL, owing to the bias introduced by model complexity regularization.

Figure 2 plots μham, μss, and the average undesirable steady-state mass obtained using the control policy designed on the inferred network via the MSSA algorithm. For comparison purposes, the latter average is compared to the average original undesirable mass and to the average undesirable mass following application of the MSSA control policy designed on the original network. As M increases from 10 to 60, all algorithms improve. In fact, with more than 50 observed state transitions for these generated random BNps, the derived stationary control policies achieve almost the same performance as the optimal control policies obtained with complete knowledge of the network models. The average performances from inferred networks are in fact within 5% for all five inference algorithms when M = 60.

Performance comparison of five network inference algorithms by different validity indices based on simulated BNps with 7 genes and K = 3. (A) Average normalized Hamming distance μham; (B) μss; (C) average undesirable steady-state mass π after applying the derived stationary control policies, based on inferred networks, to the original ground-truth BNps, compared to the average undesirable mass obtained by the optimal control policy (OPT) with complete knowledge of the original BNps and the average undesirable mass before intervention (Original).

We further evaluate the inference algorithms on a similar set of 1000 random BNps with n = 7 genes, with the same settings except that the maximum input degree is K = 5, which increases the average connectivity to 3. For this set of random BNps, the average undesirable original steady-state mass is π_org = 0.4841 with standard deviation 0.3171. When we apply the MSSA algorithm to derive the optimal stationary control policies for these random BNps, the average controlled undesirable steady-state mass is π = 0.2529 with standard deviation 0.3144. The average shift of undesirable mass is higher than for the previous set of random networks, which is expected, as network sensitivity monotonically increases with the average network connectivity (Kauffman, 1993; Shmulevich and Dougherty, 2007; Qian and Dougherty, 2009a). With higher sensitivity, networks can be more effectively controlled. We again compare the inferential validity as in the previous experiment. Figure 3 shows plots analogous to Figure 2. In particular, we note that in this set of experiments we can achieve close-to-optimal intervention with a fairly small sample size, as illustrated in Figure 3C. It is clear that the performance of the different inference algorithms depends on the characteristics of the networks, especially the network sensitivity. More specifically, all three indices become worse for all the inference algorithms, illustrating that with increasing network sensitivity the inference problem becomes more difficult. It is also clear that the performance improves at slower rates with increasing sample size when network sensitivity is higher. Another important difference is that for this set of random networks, both REVEAL and Best-Fit have higher μham when the number of samples increases above 40. The reason may be the tendency of random perturbations to push both algorithms toward more complex Boolean functions with more input variables as regulators.

Performance comparison of five network inference algorithms by different average validity indices based on BNps with 7 genes and K = 5. (A) Average normalized Hamming distance μham; (B) μss; (C) average undesirable steady-state mass π after applying the derived stationary control policies, based on inferred networks, to the original ground-truth BNps, compared to the average undesirable mass obtained by the optimal control policy (OPT) with complete knowledge of the original BNps and the average undesirable mass before intervention (Original).

4.2. Simulated BNps with 9 genes

For simulations with 9 genes, owing to run time, we generate 200 BNps with n = 9 genes and perturbation probability p = 0.01. We again make uniformly random assignments of 1 to K regulators, with K = 3, so that the average connectivity is 2. The bias for the corresponding truth tables follows the same Beta distribution with mean 0.5 and standard deviation 0.01. The number of "observed" state transitions M ranges from 10 to 60. The derivation of control policies is still based on the definition of the undesirable states U = {x | x1 = 0}, with the last node as the control gene. In the simulated random BNps, the average undesirable steady-state mass is π_org = 0.4886 with standard deviation 0.3764. When we apply the MSSA algorithm to derive the optimal stationary control policies for these random BNps, the average controlled undesirable steady-state mass is π = 0.3668 with standard deviation 0.3863. Figure 4 shows plots analogous to Figure 2, with trends similar to those observed in the previous experiments with the corresponding random BNps with 7 genes and K = 3.

Performance comparison of five network inference algorithms by different average validity indices based on BNps with 9 genes and K = 3. (A) Average normalized Hamming distance μham; (B) μss; (C) average undesirable steady-state mass π after applying the derived stationary control policies, based on inferred networks, to the original ground-truth BNps, compared to the average undesirable mass obtained by the optimal control policy (OPT) with complete knowledge of the original BNps and the average undesirable mass before intervention (Original).

In the second set of simulated random BNps with 9 genes, the settings are the same except that K = 5. In these random networks, the average undesirable steady-state mass is π_org = 0.4895 with standard deviation 0.3269. When we apply the MSSA algorithm to derive the optimal stationary control policies for these random BNps, the average controlled undesirable steady-state mass is π = 0.2781 with standard deviation 0.3268. Figure 5 is analogous to Figure 3.

Performance comparison of five network inference algorithms by different average validity indices based on BNps with 9 genes and K = 5. (A) Average normalized Hamming distance μham; (B) μss; (C) average undesirable steady-state mass π after applying the derived stationary control policies, based on inferred networks, to the original ground-truth BNps, compared to the average undesirable mass obtained by the optimal control policy (OPT) with complete knowledge of the original BNps and the average undesirable mass before intervention (Original).

In summary, when we evaluate the different inference procedures with respect to different inferential validity criteria, they show different trends with increasing sample size. Their overall performance depends on network characteristics as well as on the available samples. Finally, when effective intervention is our final operational objective, it is promising that we can achieve it based on inferred networks, even with fairly small sample sizes, as illustrated in Figures 2C, 3C, 4C and 5C.

4.3. A metastatic melanoma network

Finally, we evaluate the different inference algorithms on a metastatic melanoma network used in previous studies on network intervention (Qian and Dougherty, 2008; Qian et al., 2009; Yousefi and Dougherty, 2013). The network has 10 genes, listed in order from the most to the least significant bit: WNT5A, PIR, S100P, RET1, MMP3, PLCG1, MART1, HADHB, SNCA, and STC2. The order does not affect our analysis. We note here that this network was derived from gene expression data (Kim et al., 2002) collected in studies of metastatic melanoma (Bittner et al., 2000; Weeraratna et al., 2002). Table 2 and Figure 6 together illustrate the regulatory relationships among these 10 genes, selected from the 587 genes profiled in Bittner et al. (2000) and Weeraratna et al. (2002); the relationships were derived from gene expression data rather than from regulatory relationships curated in the literature. We believe that the model is appropriate for the purpose of illustrating the effectiveness of objective inferential validity in quantifying the performance of inference procedures in this work. Based on this information, we construct a BNp with perturbation probability p = 0.01. As in the previous studies, the control objective is based on the fact that up-regulation of WNT5A is associated with increased metastasis; thus, U = {x | x1 = 1}. For this network, the undesirable steady-state mass is π = 0.2073 in the original network, which can be reduced, as illustrated in Table 3, with different genes as potential targets using the MSSA algorithm on the original network. Based on this model, we simulate 20, 60, and 80 state transitions and infer the network from these time-series data using all five algorithms. As the primary objective here is to reduce the undesirable steady-state mass with WNT5A up-regulated, we focus on its shift as derived by the MSSA algorithm based on the networks inferred using the different inference algorithms.
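The undesirable steady-state mass π behind these numbers is the stationary probability, in the Markov chain induced by the BNp, of the states where the marker gene takes the unwanted value. The sketch below computes it by power iteration on a 3-gene toy network (not the melanoma model); truncating the perturbation kernel to zero- and single-bit flips and renormalizing is an assumption made for brevity:

```python
# Toy sketch: stationary "undesirable mass" of a BNp-induced Markov chain.
def stationary_undesirable_mass(update, n, p, marker_bit, bad_value, iters=2000):
    size = 2 ** n
    dist = [1.0 / size] * size           # start from the uniform distribution
    for _ in range(iters):
        nxt = [0.0] * size
        for s, mass in enumerate(dist):
            stay = (1 - p) ** n          # no gene perturbed: deterministic step
            nxt[update(s)] += mass * stay
            for bit in range(n):         # single-bit perturbations only;
                nxt[s ^ (1 << bit)] += mass * p * (1 - p) ** (n - 1)
        total = sum(nxt)                 # renormalize the truncated kernel
        dist = [x / total for x in nxt]
    return sum(m for s, m in enumerate(dist)
               if (s >> marker_bit) & 1 == bad_value)

# Toy update rule: rotate the 3 bits left on each deterministic step.
rotate = lambda s: ((s << 1) | (s >> 2)) & 0b111
pi_bad = stationary_undesirable_mass(rotate, 3, 0.01, marker_bit=2, bad_value=1)
print(round(pi_bad, 3))  # complement-symmetric dynamics -> mass of 0.5
```

Because the toy dynamics treat 0 and 1 symmetrically, the marker gene spends exactly half the stationary mass in the undesirable value; a control policy such as MSSA would aim to shift this mass downward.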

Table 2

Regulatory functions in the metastatic melanoma network [modified from Table 1 in Yousefi and Dougherty (2013)].

