The least absolute shrinkage and selection operator (Lasso) estimate of regression coefficients can be expressed as the Bayesian posterior mode under various hierarchical modeling schemes. A Bayesian hierarchical model requires hyperprior distributions: the regression coefficients are the parameters of interest, the normal distribution assigned to each coefficient is its prior, and the variance parameter of that normal prior is itself assigned a hyperprior so that the variance can be estimated from the data. We developed an expectation-maximization (EM) algorithm to estimate the variance parameter of the prior distribution for each regression coefficient. Performance of the EM algorithm was evaluated through a simulation study and real data analysis. We found that the Jeffreys hyperprior for the variance component usually performs well in generating the desired sparseness of the regression model. The EM algorithm handles not only the usual regression models but also, conveniently, linear models in which predictors are defined as classification variables. In the context of quantitative trait loci (QTL) mapping, the new EM algorithm can estimate both genotypic values and QTL effects expressed as linear contrasts of the genotypic values.
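To make the hierarchical scheme concrete, the following is a minimal sketch of one well-known EM update of this type: a normal prior on each coefficient with a Jeffreys hyperprior on its variance, where the E-step effectively reweights each coefficient by its own current magnitude and the M-step solves a stabilized ridge-like system (the Figueiredo-style adaptive-sparseness form). This is an illustrative assumption, not the exact algorithm of the paper; the function name, the fixed residual variance `sigma2`, and the least-squares initialization are all choices made here for the sketch.

```python
import numpy as np

def em_jeffreys_sparse(X, y, sigma2=1.0, n_iter=200, tol=1e-8):
    """Sketch of an EM loop for sparse regression with a normal prior on
    each coefficient and a Jeffreys hyperprior on its prior variance.

    The update beta <- U (U X'X U + sigma2 I)^{-1} U X'y with
    U = diag(|beta_j|) drives small coefficients toward exactly zero,
    producing the Lasso-like sparseness described in the abstract.
    """
    n, p = X.shape
    # Initialize at the ordinary least-squares solution (assumes n > p).
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # E-step (absorbed into U): coefficients near zero get heavy shrinkage.
        U = np.diag(np.abs(beta))
        # M-step: stabilized ridge-like solve; sigma2 * I keeps A invertible
        # even when some |beta_j| have collapsed to zero.
        A = U @ XtX @ U + sigma2 * np.eye(p)
        beta_new = U @ np.linalg.solve(A, U @ Xty)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta
```

Because predictors enter only through the design matrix `X`, the same loop applies when columns of `X` encode classification variables (e.g. QTL genotype indicators), which is the convenience the abstract notes for linear models with categorical predictors.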