
CHER aims to uncover predictive features that are shared across contexts, as well as features that are predictive only in certain contexts. A context can be a cancer type, tissue type, or cancer subtype; we refer to it as the relevant subtype, or the split, that separates individuals into two groups in which the predictive program of drug sensitivity may differ. CHER simultaneously achieves two goals: it performs explicit sparse feature selection while optimizing the accuracy of drug-sensitivity prediction. Whereas accurate prediction of drug sensitivity is crucial for precision medicine, sparse feature selection allows for biological interpretation of the resulting models. The latter is especially important because it may provide an understanding of drug resistance that could shed light on ways to improve drug development or combinatorial therapy. Our algorithm is inspired by transfer-learning theory: we increase power by sharing information between cancers and between drugs. First, we learn models from similar cancers, essentially sharing information between cancers by assuming that they may share the same genomic features responsible for drug sensitivity. By pooling samples of similar cancers, we boost the power to learn predictors common to them. To learn context-specific, or cancer-type-specific, predictors, we introduce a split variable that represents types or subtypes of cancers. This split variable conditions the predictive effects of context-specific features via interaction terms between the split variable and the predictors in the model. Note that the choice of split is part of the optimization problem: CHER learns how to separate samples into two groups when such a separation increases predictive power.
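The split-conditioned model described above can be sketched as a sparse linear regression over a design matrix augmented with interaction terms between the split variable and every feature. The data, feature indices, threshold, and the use of scikit-learn's Lasso below are illustrative assumptions, not CHER's actual implementation:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical data: 100 samples, 20 genomic features, and a binary
# split variable z (e.g., melanoma vs. glioma). Sizes are illustrative.
n, p = 100, 20
X = rng.normal(size=(n, p))
z = rng.integers(0, 2, size=n)  # split variable: 0 or 1

# Simulated ground truth: feature 0 is shared across contexts;
# feature 3 is predictive only in the z == 1 context.
y = 2.0 * X[:, 0] + 3.0 * (z * X[:, 3]) + 0.1 * rng.normal(size=n)

# Augment the design matrix with interaction terms I(z_i = 1) * x_ij,
# so a single sparse fit can pick both shared and context-specific
# predictors, as in Fig 1A.
X_aug = np.hstack([X, z[:, None] * X])

model = Lasso(alpha=0.1).fit(X_aug, y)
shared = np.flatnonzero(np.abs(model.coef_[:p]) > 0.5)
context = np.flatnonzero(np.abs(model.coef_[p:]) > 0.5)
print("shared predictors:", shared)
print("context-specific predictors:", context)
```

In the full method the split itself is optimized rather than given; this sketch only shows how a known split conditions predictive effects through interaction columns.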
At this stage, CHER has learned an initial model that may contain predictors shared between cancers or specific to one of them. Next, we boost CHER's learning by transferring information between drugs. We assume that if two drugs induce similar responses, their predictive models are likely similar as well. For example, if two drugs induce highly correlated responses and we have observed gene A as a predictor of sensitivity to one drug, it is more likely that gene A is also predictive for the other drug. This allows us to adjust our belief in each feature being predictive of drug sensitivity by comparing models derived for similar drugs. From the Bayesian perspective, initial models

Fig 1. Overview of the CHER algorithm. A. Example of a model learned by CHER, where the drug sensitivity of melanoma samples can be predicted by the mutation of gene M and the expression of genes A and S, whereas in glioma the expression of genes S and B are the predictors. CHER takes advantage of pooling samples together to gain statistical power, identifying both shared and context-specific features. In cases where the relevant context is unknown, the algorithm searches for the best "split", if any, to separate samples into two groups. Y_i represents the drug sensitivity of the ith sample, x_i are the corresponding features of the ith sample, z_i = 1 indicates that the ith sample is melanoma, and I is an indicator function. B. Iterative learning scheme of CHER. CHER initially learns models with a uniform prior. During each iteration, CHER trains the regression models with bootstrapping, which allows the algorithm to establish the frequency of each feature being selected. T
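The between-drug transfer and the bootstrapped selection frequencies of Fig 1B can be sketched as follows. This is a minimal illustration under assumed data, with a plain Lasso standing in for CHER's regression: bootstrap refits estimate how often each feature is selected for drug A, and those frequencies relax the penalty on the same features when fitting a drug B with a highly correlated response. The per-feature penalty trick via column rescaling is our illustrative choice, not CHER's stated procedure:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Hypothetical data: two drugs whose responses are driven by the same
# gene (feature 2); sizes and indices are illustrative only.
n, p = 80, 15
X = rng.normal(size=(n, p))
y_a = 1.5 * X[:, 2] + 0.3 * rng.normal(size=n)
y_b = 1.2 * X[:, 2] + 0.3 * rng.normal(size=n)

def selection_freq(X, y, n_boot=50, alpha=0.1):
    """Fraction of bootstrap refits in which each feature is selected."""
    counts = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))  # resample with replacement
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        counts += np.abs(coef) > 1e-8
    return counts / n_boot

freq_a = selection_freq(X, y_a)

# Transfer step: the two responses are highly correlated, so shrink the
# penalty on features frequently selected for drug A when fitting drug B.
# A smaller per-column scale emulates a weaker L1 penalty on that feature.
corr = np.corrcoef(y_a, y_b)[0, 1]
scale = np.maximum(1.0 - corr * freq_a, 1e-3)
coef_b = Lasso(alpha=0.1).fit(X / scale, y_b).coef_ / scale
print("top predictor for drug B:", int(np.argmax(np.abs(coef_b))))
```

Here the selection frequencies play the role of the prior that CHER updates across iterations: features that prove stable under bootstrapping for one drug are made cheaper to select for similar drugs.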