A fundamental task in RNA-seq data analysis is to determine whether the read counts for a gene or exon differ significantly across experimental conditions. Since RNA-seq measurements are relative in nature, between-sample normalization of counts is an essential step in differential expression (DE) analysis. In most existing methods, the normalization step is independent of the DE analysis, which is not well justified, since ideally normalization should be based only on non-DE genes. Recently, Jiang and Zhan proposed a robust statistical model for joint between-sample normalization and DE analysis of log-transformed RNA-seq data. Sample-specific normalization factors are modeled as unknown parameters in gene-wise linear models, and an L0 penalty is introduced to induce sparsity in the regression coefficients. In their model, the experimental conditions are assumed to be categorical (e.g., 0 for control and 1 for case), and one-way analysis of variance (ANOVA) is used to identify genes that are differentially expressed between two or more conditions. In this work, Jiang and Zhan's model is generalized to accommodate continuous/numerical experimental conditions, and a linear regression model is used to detect genes whose expression levels are significantly affected by the experimental conditions. Furthermore, an efficient algorithm is developed to compute the global solution of the resulting high-dimensional, non-convex, and non-differentiable penalized least squares regression problem. Extensive simulation studies and a real RNA-seq data example show that when the proportion of DE genes is small, or when the numbers of up- and down-regulated genes are approximately equal, the proposed method performs similarly to existing methods in terms of detection power and false positive rate.
When a large proportion (e.g., > 30%) of the genes are differentially expressed in an asymmetric manner, the proposed method outperforms existing methods, and the performance gain becomes even more substantial as the sample size increases.
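Although the abstract does not state the model explicitly, the joint normalization-and-regression objective it describes can plausibly be sketched as follows (the notation below is assumed for illustration, not taken from the source):

\[
\min_{\{d_j\},\,\{\mu_g,\,\beta_g\}} \;\sum_{g=1}^{G}\sum_{j=1}^{n}\left(y_{gj}-\mu_g-d_j-\beta_g x_j\right)^{2}\;+\;\lambda\sum_{g=1}^{G}\mathbf{1}\{\beta_g\neq 0\},
\]

where \(y_{gj}\) would denote the log-transformed read count of gene \(g\) in sample \(j\), \(d_j\) the sample-specific normalization factor, \(x_j\) the continuous experimental condition, \(\mu_g\) the baseline expression level, and \(\beta_g\) the gene-wise regression coefficient. Under such a formulation, the L0 penalty forces \(\beta_g=0\) for most genes, so the normalization factors \(d_j\) are effectively estimated from the non-DE genes, which is the motivation stated in the abstract for performing normalization and DE analysis jointly.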