
I really like the Cover and Hart result: that you can do nearest neighbor classification (classify to the category of the most similar previous observation) and asymptotically get an error rate at most twice that of the Bayes classifier (classify to the category with the highest posterior probability).
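Here’s a minimal sketch of that comparison in Python, on a toy problem (two 1-D Gaussians with equal priors) where the Bayes rule is known exactly. The setup and names are just illustrative, not anything from Cover and Hart’s paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Draw n labeled points: class 0 ~ N(-1, 1), class 1 ~ N(+1, 1), equal priors."""
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=2 * y - 1, scale=1.0, size=n)
    return x, y

def bayes_classify(x):
    """The Bayes rule for this toy problem: with equal priors and variances,
    just pick the closer class mean (decision boundary at x = 0)."""
    return (x > 0).astype(int)

def nn_classify(x, train_x, train_y):
    """1-NN: copy the label of the most similar previous observation."""
    nearest = np.abs(x[:, None] - train_x[None, :]).argmin(axis=1)
    return train_y[nearest]

train_x, train_y = sample(2000)
test_x, test_y = sample(4000)

bayes_err = np.mean(bayes_classify(test_x) != test_y)
nn_err = np.mean(nn_classify(test_x, train_x, train_y) != test_y)
print(f"Bayes error ~ {bayes_err:.3f}, 1-NN error ~ {nn_err:.3f}")
# Here the Bayes error is about 0.16, and 1-NN lands well inside the
# Cover-Hart factor-of-two bound.
```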

Actually, don’t think of your dataset as observations. Think of it as samples from the posterior distribution, which you generated computationally. Then you can do classification by sampling from the posterior distribution, instead of by calculating the posterior probability for the individual case you’re given.
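A rough sketch of what I mean, reusing the toy model from above as a stand-in for whatever model you actually have: classify a new case by generating hypothetical cases and copying the label of the nearest one, rather than computing p(y | x) for the case at hand.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_model(n):
    """Generate n hypothetical (x, y) cases: the same toy two-Gaussian
    setup as above, standing in for any model you can sample from but
    can't easily compute posteriors under."""
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=2 * y - 1, scale=1.0, size=n)
    return x, y

def classify_by_sampling(x_new, n_samples=2000):
    """Classify x_new to the label of the most similar generated case,
    instead of calculating p(y | x_new) directly. Cover and Hart say
    this costs at most a factor of two in error rate, asymptotically."""
    xs, ys = sample_model(n_samples)
    return ys[np.abs(xs - x_new).argmin()]

print(classify_by_sampling(0.3))  # usually 1, since 0.3 is on class 1's side
```

On this toy problem you could of course just compute the posterior; the point is that the sampling rule only ever needs forward simulation from the model.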

And then, sampling from the posterior distribution seems analogous to sitting around and thinking of hypothetical situations that seem plausible to you. So this is a model of reasoning that has a place for thinking: I mean, a kind of thinking which isn’t just counting the possible worlds that remain after eliminating those inconsistent with your observations. It doesn’t tell you precisely how much thinking you need to do, though, or when you need more of it, so it’s not a model of reasoning that can really guide thinking? Except, maybe, “try to think of a representative sample of plausible cases”, rather than “think of plausible cases in which your favored presidential candidate would have a positive impact”, or something.
