Learning to act in multiagent systems differs from traditional single-agent learning in that the optimal strategy depends on the other agents: while an agent seeks a best-response strategy, the other agents may adapt their strategies in turn. Our goal is to tackle partially observable multiagent scenarios by proposing a framework based on learning robust best responses (i.e., skills) and Bayesian inference for opponent detection. To shorten long training periods, we propose to intelligently reuse policies (skills) by quickly identifying the opponent we are playing against.
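The combination of Bayesian opponent identification and skill reuse can be sketched as follows. This is a minimal illustration, not the paper's implementation: the opponent types, likelihood values, and skill names are all hypothetical, standing in for learned opponent models and pretrained best-response policies.

```python
def update_belief(belief, likelihoods):
    """One step of Bayes' rule over opponent types:
    posterior(opponent) ∝ prior(opponent) * P(observation | opponent)."""
    posterior = {k: belief[k] * likelihoods[k] for k in belief}
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}

def select_skill(belief, skills):
    """Reuse the precomputed best-response policy (skill)
    for the most likely opponent under the current belief."""
    most_likely = max(belief, key=belief.get)
    return skills[most_likely]

# Hypothetical example: two known opponent types, uniform prior.
belief = {"aggressive": 0.5, "defensive": 0.5}
# An observed action that is far more likely under the "aggressive" model.
belief = update_belief(belief, {"aggressive": 0.9, "defensive": 0.2})
skills = {"aggressive": "skill_vs_aggressive", "defensive": "skill_vs_defensive"}
print(select_skill(belief, skills))
```

After a few informative observations the belief concentrates on the true opponent type, and the matching skill is selected without further training.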