I have been working with fuzzy logic (FL) for years and I know there are differences between FL and probability theory, especially concerning the way FL deals with uncertainty.

However, I would like to ask: what other differences exist between FL and probability theory?

In other words, if I can fuse information and aggregate knowledge with probabilities, can I do the same with FL?

George Klir and Bo Yuan's Fuzzy Sets and Fuzzy Logic: Theory and Applications (1995) provides in-depth discussion of the differences between the fuzzy and probabilistic versions of uncertainty, as well as several other related types from Evidence Theory, possibility distributions, etc.

It is chock-full of formulas for measuring fuzziness (uncertainty in measurement scales) and probabilistic uncertainty (variants of Shannon's entropy, etc.), plus a few for aggregating across these various types of uncertainty.
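To give a rough feel for the difference between the two kinds of measures, here is a minimal Python sketch of my own (standard textbook formulas, not the book's exact notation): Shannon entropy for a probability distribution versus De Luca and Termini's entropy-based index of fuzziness for a set of membership grades.

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def fuzziness(mu):
    """De Luca & Termini index of fuzziness for membership grades mu in [0, 1].

    Zero for a crisp set (every grade 0 or 1), maximal when every grade
    is 0.5, i.e. when membership itself is maximally vague.
    (Toy illustration, not the book's exact formulation.)
    """
    def s(m):
        if m in (0.0, 1.0):
            return 0.0
        return -(m * math.log2(m) + (1 - m) * math.log2(1 - m))
    return sum(s(m) for m in mu)

# Note: a probability distribution must sum to 1; membership grades need not.
print(shannon_entropy([0.5, 0.25, 0.25]))   # 1.5 bits
print(fuzziness([1.0, 0.5, 0.0, 0.9]))      # mostly contributed by the 0.5 grade
```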

There are also a few chapters on aggregating fuzzy numbers, fuzzy equations, and fuzzy logic statements that you may find helpful.
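If it helps, here is the flavor of what those chapters let you do, sketched in Python with simple triangular fuzzy numbers. The (left, peak, right) representation and the two operations are my own toy choices, not the book's notation, but component-wise addition is the standard extension-principle result for triangular fuzzy numbers.

```python
def add_triangular(a, b):
    """Add two triangular fuzzy numbers given as (left, peak, right) tuples."""
    return tuple(x + y for x, y in zip(a, b))

def weighted_average(numbers, weights):
    """Weighted average of triangular fuzzy numbers (a simple aggregation)."""
    total = sum(weights)
    return tuple(
        sum(w * n[i] for n, w in zip(numbers, weights)) / total
        for i in range(3)
    )

roughly_2 = (1.0, 2.0, 3.0)
roughly_5 = (4.0, 5.0, 6.5)
print(add_triangular(roughly_2, roughly_5))              # (5.0, 7.0, 9.5)
print(weighted_average([roughly_2, roughly_5], [1, 3]))  # pulled toward "roughly 5"
```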

I translated a lot of these formulas into code, but am still learning the ropes as far as the math goes, so I'll let Klir and Yuan do the talking.

I was able to pick up a used copy for $5 a few months back. :)

Klir also wrote a follow-up book on Uncertainty around 2004, which I have yet to read.

(My apologies if this thread is too old to respond to - I'm still learning the forum etiquette).

Edited to add: I’m not sure which of the differences between fuzzy and probabilistic uncertainty the OP was already aware of, which ones he needed more info on, or what types of aggregation he meant, so I’ll just provide a list of some differences I gleaned from Klir and Yuan, off the top of my head.

The gist is that yes, you can fuse fuzzy numbers, measures, etc.
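For instance, fusing two fuzzy assessments of the same items comes down to picking an aggregation operator. A minimal sketch of my own, using the standard min (intersection / t-norm), max (union / t-conorm), and averaging families rather than any specific formula from the book:

```python
def fuse(mu1, mu2, how="min"):
    """Fuse two membership vectors defined over the same universe.

    'min'  -> standard fuzzy intersection (a t-norm)
    'max'  -> standard fuzzy union (a t-conorm)
    'mean' -> a compensatory averaging operator
    """
    ops = {
        "min": min,
        "max": max,
        "mean": lambda a, b: (a + b) / 2,
    }
    op = ops[how]
    return [op(a, b) for a, b in zip(mu1, mu2)]

# Hypothetical membership grades from two sources, e.g. two experts or sensors.
source_a = [0.2, 0.8, 0.6]
source_b = [0.4, 0.7, 0.1]
print(fuse(source_a, source_b, "min"))   # conservative fusion
print(fuse(source_a, source_b, "mean"))  # compromise fusion
```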