In short, no, you cannot do this sort of semantics with NLTK alone. Using WordNet will not work either, because most sentences contain words that are simply not in the database. The current way to approximate sentential semantics involves distributional techniques (word space models).
If you are a Python programmer, scikit-learn and Gensim give you the functionality you want by means of Latent Semantic Analysis (LSA, also called LSI) and Latent Dirichlet Allocation (LDA). See the answers to this previous question. In Java, I would suggest trying the excellent S-Space package.
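To give a flavour of the distributional approach, here is a minimal LSA sketch using scikit-learn (TF-IDF followed by truncated SVD). The toy documents are made up for illustration; in practice you would train on a much larger corpus.

```python
# LSA sketch: project documents into a low-dimensional "semantic" space
# and compare them there. Toy corpus, purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "a kitten rested on the mat",
    "stock markets fell sharply today",
]

tfidf = TfidfVectorizer().fit_transform(docs)       # term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)  # truncated SVD = LSA
vectors = lsa.fit_transform(tfidf)                  # docs in latent space

sims = cosine_similarity(vectors)
# The two cat/mat documents end up closer to each other than to the
# finance document.
print(sims[0, 1], sims[0, 2])
```

With only three documents the latent space is trivial, but the same pipeline scales to real corpora, where semantically related documents that share few surface words can still end up close together.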
However, most models will give you a strictly word-based representation. Combining the semantics of words into larger structures is much more difficult, unless you assume that phrases and sentences are bags of words (thus missing the difference between, e.g., Mary loves Kate and Kate loves Mary).
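The bag-of-words limitation is easy to demonstrate: because word order is discarded, the two example sentences map to exactly the same representation. A minimal sketch:

```python
# A bag-of-words representation is just a multiset of tokens, so any
# two sentences with the same words are indistinguishable.
from collections import Counter

def bag_of_words(sentence):
    """Lowercase, whitespace-tokenize, and count word occurrences."""
    return Counter(sentence.lower().split())

a = bag_of_words("Mary loves Kate")
b = bag_of_words("Kate loves Mary")
print(a == b)  # True: who loves whom is lost
```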