Anonymous Oct 30, 2013 at 1:08 pm

Comments

I know all about this. Clippy was an Intelligent User Interface for Microsoft Office that assisted users by way of an interactive animated character, which interfaced with the Office help content. It used technology initially from Microsoft Bob and later Microsoft Agent, offering advice based on Bayesian algorithms.
So get your facts straight. I think by now just about everyone has heard of the most annoying feature ever to be included in a commercial piece of software. I am talking, of course, about Clippy, the Microsoft Office Assistant that many loved to hate. Clippy was first included in the 1997 release of the Office suite and remained part of the product line until 2007, when it was permanently removed.

Many people know Clippy as a major nuisance, but few know the story behind the technology and why it sucked so much. In 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human-computer interaction using Bayesian methods. The group wanted to create smart technologies that could observe a user interacting with a program, infer the user’s goals and needs, and provide useful feedback and assistance as necessary. Developing such a technology makes sense, since many people are intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good; the mathematics behind them is solid and has found many useful applications to date.
Again, that doesn’t mean it wasn’t a good idea. In practice, though, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer a user’s intent from their interaction with the program: mouse movements, which menu items were selected, context (what the user is trying to accomplish; remember how Clippy would pop up saying something like “I think you are trying to write a letter. Would you like some help?”), and explicit text queries from the user, e.g., “How do I print a document?”
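To make that kind of inference concrete, here is a toy naive-Bayes sketch of goal inference from UI events. The goals, events, priors, and likelihood numbers are all invented for illustration; this is not Microsoft’s actual model, just the general shape of the Bayesian reasoning involved.

```python
# Toy goal inference via Bayes' rule. All probabilities are made up.
PRIORS = {"write_letter": 0.2, "make_table": 0.3, "just_typing": 0.5}

# P(observation | goal) for a couple of hypothetical UI events.
LIKELIHOODS = {
    "typed_dear":        {"write_letter": 0.70, "make_table": 0.05, "just_typing": 0.10},
    "opened_table_menu": {"write_letter": 0.05, "make_table": 0.80, "just_typing": 0.10},
}

def posterior(observations):
    """Return P(goal | observations), treating observations as
    conditionally independent given the goal (naive Bayes)."""
    scores = dict(PRIORS)
    for obs in observations:
        for goal in scores:
            scores[goal] *= LIKELIHOODS[obs][goal]
    total = sum(scores.values())
    return {goal: p / total for goal, p in scores.items()}

probs = posterior(["typed_dear"])
best_goal = max(probs, key=probs.get)
print(best_goal, round(probs[best_goal], 2))  # write_letter 0.68
```

With these made-up numbers, typing “Dear” is enough to push “write a letter” to roughly a 68% posterior, which is exactly the sort of threshold crossing that would trigger an offer of help.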

Any user model that can adequately capture all the relevant information will necessarily have many variables, and the values of those variables must be estimated over time. Moreover, different users interact with a piece of software differently. For example, an experienced user most likely needs less help overall, yet may need help with more obscure features of the software than a novice would. Personalization is therefore a very important factor in making such systems work well.
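As a hypothetical sketch of the personalization idea, one simple way to estimate a single such variable over time (say, the probability that this user handles tasks unaided) is a Beta-distribution update. The class and numbers below are invented for illustration, not taken from the Lumiere papers.

```python
# Track one personalization variable with a Beta distribution:
# the probability that the user completes tasks without help.
class ExpertiseModel:
    def __init__(self):
        # Beta(1, 1) is a uniform prior: we know nothing yet.
        self.alpha = 1.0  # unaided successes observed
        self.beta = 1.0   # times the user needed help

    def observe(self, needed_help):
        """Update the estimate after each completed task."""
        if needed_help:
            self.beta += 1.0
        else:
            self.alpha += 1.0

    def p_expert(self):
        """Posterior mean: current estimate of user expertise."""
        return self.alpha / (self.alpha + self.beta)

model = ExpertiseModel()
for needed_help in [False, False, True, False]:
    model.observe(needed_help)
print(round(model.p_expert(), 2))  # 0.67
```

An assistant could use such an estimate to throttle itself: offer help eagerly while `p_expert()` is low, and stay quiet once the user has demonstrated competence.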

To make a long story short, the Microsoft researchers, led by senior scientist Dr. Eric Horvitz, were making good progress, and within two years they had a nice system working. So in 1995, as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In it, Horvitz explains how the inference engine worked in 1995 and how the team envisioned it working in later versions with a cartoon-character front end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain, using evidence from a number of Microsoft Research publications and personal knowledge, why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and how much of it made it into the released version of Office 97. Below is a list of the features that were excluded from the product release. (Those keen enough can cross-reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products, an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed, and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents.
So what’s wrong with a paperclip with googly eyes and expressive eyebrows? Clippy was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications, including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to walk you through the process.

Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that adequately captures all the relevant information will necessarily have many variables, and the values of those variables must be estimated over time. Moreover, different users interact with the same piece of software differently. For example, an experienced user most likely needs less help overall, yet may still need help with the more obscure features of the software compared with a novice. Personalization is therefore a very important factor in making such systems work well.
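To make the intent-inference idea concrete, here is a toy naive-Bayes sketch of the kind of reasoning such a system performs. The goals, observed events, and every probability below are invented purely for illustration; none of them come from the actual Lumiere models, which were far richer Bayesian networks.

```python
# Toy illustration of Bayesian intent inference, in the spirit of
# (but much simpler than) Lumiere. All goals, events, and numbers
# are made up for this example.

PRIORS = {"write_letter": 0.2, "make_table": 0.3, "browse": 0.5}

# P(event | goal): how likely each observed action is under each goal.
LIKELIHOODS = {
    "typed_dear":        {"write_letter": 0.70, "make_table": 0.05, "browse": 0.10},
    "opened_table_menu": {"write_letter": 0.05, "make_table": 0.60, "browse": 0.15},
}

def infer_goal(events, priors=PRIORS):
    """Return P(goal | events), assuming events are conditionally
    independent given the goal (the naive-Bayes assumption)."""
    posterior = dict(priors)
    for event in events:
        for goal in posterior:
            posterior[goal] *= LIKELIHOODS[event][goal]
    total = sum(posterior.values())
    return {goal: p / total for goal, p in posterior.items()}

beliefs = infer_goal(["typed_dear"])
print(max(beliefs, key=beliefs.get))  # -> write_letter
```

Even this toy version shows why the problem is hard: the quality of the inference depends entirely on how well the likelihood tables reflect real user behavior, and those tables must be estimated, and personalized, from noisy interaction data.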

To make a long story short, the Microsoft researchers, led by senior scientist Dr. Eric Horvitz, made good progress, and within two years they had a working system. So in 1995, having already begun collaborating with the Microsoft Office product team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a nine-minute tour of Lumiere working in Excel. In it, Horvitz explains how the inference engine worked in 1995 and how the team envisioned it working in later versions with a cartoon-character front end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain why Clippy worked so poorly in the 1997 release of Microsoft Office, drawing on a number of Microsoft Research publications as well as personal knowledge.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and how much of it made it into the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross-reference the list with what was demoed in the video above).

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products: an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed, and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents.
So what’s wrong with a paperclip with googly eyes and expressive eyebrows? Clippy was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications, including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with a variety of predetermined messages, including “Hey! It looks like you’re writing a letter!”, before offering to walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


Many people know Clippy as a major nuisance but few know the story behind the technology and why it sucked so much. n 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human computer interaction using Bayesian methods. The group wanted to create smart technologies that can observe a user interacting with a computer program and infer his/her goals and needs providing valuable feedback and assistance as necessary. Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


Many people know Clippy as a major nuisance but few know the story behind the technology and why it sucked so much. n 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human computer interaction using Bayesian methods. The group wanted to create smart technologies that can observe a user interacting with a computer program and infer his/her goals and needs providing valuable feedback and assistance as necessary. Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


Many people know Clippy as a major nuisance but few know the story behind the technology and why it sucked so much. n 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human computer interaction using Bayesian methods. The group wanted to create smart technologies that can observe a user interacting with a computer program and infer his/her goals and needs providing valuable feedback and assistance as necessary. Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can't think of many more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products: an aberration best known in the form of Clippy, the "Office Assistant" paperclip, who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It's hard to take Clippy, Microsoft Bob, and Windows XP's Search Assistant dog seriously. But a dozen years' worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed, and I'm convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents.
So what's wrong with a paperclip with googly eyes and expressive eyebrows? Clippy was designed by Kevan J. Atteberry to serve as a user-friendly troubleshooter for people using Office applications such as Word and Excel. For instance, typing an address followed by "Dear" would cause Clippy to pop up with one of a variety of predetermined messages, including "Hey! It looks like you're writing a letter!" before offering to walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


Many people know Clippy as a major nuisance but few know the story behind the technology and why it sucked so much. n 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human computer interaction using Bayesian methods. The group wanted to create smart technologies that can observe a user interacting with a computer program and infer his/her goals and needs providing valuable feedback and assistance as necessary. Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


Many people know Clippy as a major nuisance but few know the story behind the technology and why it sucked so much. n 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human computer interaction using Bayesian methods. The group wanted to create smart technologies that can observe a user interacting with a computer program and infer his/her goals and needs providing valuable feedback and assistance as necessary. Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


Many people know Clippy as a major nuisance but few know the story behind the technology and why it sucked so much. n 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human computer interaction using Bayesian methods. The group wanted to create smart technologies that can observe a user interacting with a computer program and infer his/her goals and needs providing valuable feedback and assistance as necessary. Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


Many people know Clippy as a major nuisance but few know the story behind the technology and why it sucked so much. n 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human computer interaction using Bayesian methods. The group wanted to create smart technologies that can observe a user interacting with a computer program and infer his/her goals and needs providing valuable feedback and assistance as necessary. Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.


Any user model that can adequately capture all the relevant information will necessarily have many variables, and the values of those variables must be estimated over time. Moreover, different users interact with a piece of software differently. For example, an experienced user most likely needs less help overall, yet may need help with more obscure features of the software than a novice would. Personalization is a very important factor in ensuring that such systems work well.
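To make the idea concrete, here is a toy sketch of this kind of inference. It is not Lumiere’s actual model (which used full Bayesian networks); it is a naive-Bayes update over a few invented evidence signals, with the prior acting as the personalization knob the paragraph above describes. All the probabilities are made up for illustration.

```python
def posterior(prior, evidence, likelihoods):
    """P(goal | evidence) via a naive-Bayes update over independent signals.

    likelihoods maps signal -> (P(signal | goal), P(signal | not goal));
    evidence maps signal -> True/False (whether it was observed).
    """
    p_goal, p_not = prior, 1.0 - prior
    for signal, observed in evidence.items():
        l_goal, l_not = likelihoods[signal]
        if observed:
            p_goal *= l_goal
            p_not *= l_not
        else:
            p_goal *= 1.0 - l_goal
            p_not *= 1.0 - l_not
    return p_goal / (p_goal + p_not)

# Invented numbers: how likely each signal is when the user is / is not
# trying to write a letter.
likelihoods = {
    "typed_dear":       (0.70, 0.02),
    "opened_help_menu": (0.30, 0.10),
    "long_mouse_pause": (0.40, 0.25),
}
evidence = {"typed_dear": True, "opened_help_menu": False,
            "long_mouse_pause": True}

# Personalization: a novice starts from a higher prior of needing help
# than an expert, so the same evidence yields different posteriors.
novice = posterior(0.20, evidence, likelihoods)
expert = posterior(0.05, evidence, likelihoods)
print(f"novice: {novice:.2f}, expert: {expert:.2f}")  # novice: 0.92, expert: 0.70
```

With identical evidence, the assistant would be far more confident that the novice wants help, which is exactly why estimating per-user variables over time matters: without it, every user gets interrupted as if they were the most helpless one.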

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


Many people know Clippy as a major nuisance but few know the story behind the technology and why it sucked so much. n 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human computer interaction using Bayesian methods. The group wanted to create smart technologies that can observe a user interacting with a computer program and infer his/her goals and needs providing valuable feedback and assistance as necessary. Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


Many people know Clippy as a major nuisance but few know the story behind the technology and why it sucked so much. n 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human computer interaction using Bayesian methods. The group wanted to create smart technologies that can observe a user interacting with a computer program and infer his/her goals and needs providing valuable feedback and assistance as necessary. Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So whats wrong with a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry[9] to serve as a user-friendly troubleshooter for people using Office applications including Word and Excel. For instance, typing an address followed by “Dear” would cause Clippy to pop up with and a variety of pre-determined messages, including “Hey! It looks like you’re writing a letter!” before offering to help walk you through the process.

Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?


Many people know Clippy as a major nuisance but few know the story behind the technology and why it sucked so much. n 1993, Microsoft researchers from the Decision Theory & Adaptive Systems Group established the Lumiere project to study and improve human computer interaction using Bayesian methods. The group wanted to create smart technologies that can observe a user interacting with a computer program and infer his/her goals and needs providing valuable feedback and assistance as necessary. Developing such a technology makes sense since many people often become intimidated by complex software interfaces. I won’t bore you with the details of what Bayesian methods are and why they are good. The mathematics behind such methods is solid and has had many useful applications to date.
Again it doesnt mean it wasnt a good idea.Actually, inferring a user’s intent is a very hard problem no matter how good your math is. The Microsoft team had to infer user intent from his interaction with the program, e.g., mouse movement, what menu items were selected, context (what is the user trying to do – remember how Clippy always came up saying something like “I think you are trying to write a letter. Would you like some help?”) and specific text queries by the user, e.g., how do I print a document?

Any user model that can adequately capture all the relevant information will necessarily have many variables. The values of these variables must be estimated over time. Moreover, different users tend to interact with a piece of software differently. For example, an experienced user is most likely to need less help; the same user may also help with the more obscure features of the software compared to a novice. Personalization is a very important factor in ensuring that such systems work well.

To make a long story short, the Microsoft researchers led by the senior scientist Dr. Eric Horvitz were making good progress and in 2 years time they already had a nice system working. So, in 1995 and as the team had already started collaborating with the Microsoft Office production team, they put together a demonstration of Lumiere’s inference engine for Excel. The video below is a 9-minute tour of Lumiere working in Excel. In the video, Horvitz explains how the inference engine worked in 1995 and how they envisioned it working in later versions using a cartoon character front-end. Watch the last minute of the video for a glimpse of Clippy’s grandfather.

After the video, I explain using evidence from a number of Microsoft Research publications and personal knowledge why Clippy worked so poorly in the 1997 release of Microsoft Office.
Well, after doing some research I found out what went wrong. In a paper published in 1998 at the Conference on Uncertainty in Artificial Intelligence (UAI), the Lumiere team described the inner workings of the Assistant’s inference engine and also how much of it was included in the released version of Office 97. Below is a list of the features that were excluded from the product release (those keen enough can cross reference the list with what was demoed in the video above.)

Of all the peculiar ideas that Microsoft has pursued over its almost 34 years in business, I can’t think of many that are more inexplicable than its long-standing interest in using animated characters to provide help to users of its software products–an aberration best known in the form of Clippy, the “Office Assistant” paperclip who was introduced in Office 97 and only departed the scene completely when the company released Office 2008 for the Mac a year ago. It’s hard to take Clippy, Microsoft Bob, and Windows XP’s Search Assistant doggie seriously. But a dozen years’ worth of patents relating to the basic idea shows that Microsoft takes it very seriously indeed–and I’m convinced that someone, somewhere within the company is still working away at it. Herewith, some images from those patents
So what's wrong with a paperclip? Clippy, a paperclip with googly eyes and expressive eyebrows, was designed by Kevan J. Atteberry to serve as a user-friendly troubleshooter for people using Office applications, including Word and Excel. For instance, typing an address followed by "Dear" would cause Clippy to pop up with a variety of pre-determined messages, including "Hey! It looks like you're writing a letter!" before offering to help walk you through the process.

2
^^^

You didn't, by chance, cut-and-paste that, did you?
3
Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

Derpity derpity doo doo doopy doopy doo derp a derp a derp deeerp blah blah blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

Are you still fucking reading this bullshit? Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.

For fuck's sake get on with your life and stop reading this bullshit Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah Blah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blahBlah blah blah blah blah blah blah blabbity blabbity blah blah blabbity blah blah.
4
No, please, bore us with the details.
5
Guessin' it's kind of a slow day down at One Anonymous Plaza, huh?
6
Didn't read much, but the Wizard figure and the Dog were kinda cool. Wanted to kill them after a while, but cool.

Now, what's a Bayesian method?
7
I did not call him an asshole, I called him a fucking asshole because that's just what he is!
8
The longer I scrolled through that first comment, the more I laughed
9
@i'mrightyourwrong: Lick my clippy.
10
There's a sexual tension going on here for a while now. Literary sexual tension.
11
Is this the reason why the O'bama Care website doesn't work, or did they simply hire the same programmer who wrote the software for the Orygun DMV and Portland Water Bureau?
12
I see that you're trying to comment on an I, Anonymous using Copy & Paste. Do you need assistance?
13
I've got a hole in my butt.
14
"To make a long story short", "I won't bore you with details"

Classic!
15
I feel so fulfilled now that I know this.
16
what did I just read?
