Today we are officially launching Wolfram|Alpha to the world at large. It has been a very successful weekend of testing and learning. We’re flattered by the positive reception thus far, and we are dedicated to furthering the project with the help of you, our community of users.
To that end we are officially launching the Wolfram|Alpha Community, which allows you to submit questions, ideas, and favorite inputs.
We already have a few static forms to contribute things such as facts, figures, and structured data or algorithms, methods, and models. The Community serves to supplement these types of feedback with a more free-form discussion among all Wolfram|Alpha users.
In the Community, you can vote for items that you feel deserve further attention. We support threaded commenting, unique user profiles, and social sharing via email, Twitter, and Facebook. The Community also allows you to “save” items of interest so that you can track their progress over time.
This crowd-sourced model will help our team here gain a better understanding of what features, improvements, and possibilities the Community thinks are most interesting and worthwhile.
There has been a tremendous amount of useful feedback thus far, and much of that information is being used to make immediate improvements in near real time.
But it is also our hope that the Wolfram|Alpha Community will help make the feedback process more direct and have more impact. The Community will provide us with a mechanism to report back to you with changes, new results and capabilities, and overall improvements, thereby closing the loop and making the entire system more transparent.
Of course, we won’t be able to respond to every submission. But we’ll do our very best to respond to all relevant and substantive items. Additionally, it is our hope that members of the Community will likewise take the time to assist their peers, pointing them in the right direction and offering valuable advice and context.
Thanks again for all of your support and please join us in the Community!
Very cool. It will be interesting to see how this system evolves.
Please can you put the queries listed in this blog into your “TO DO” list:
http://www.alwaysthetwain.com/blogs/2009/05/18/wolfram-alpha-google-true-knowledge-the-twain-test/
The link above shows results comparing Wolfram Alpha with Google and True Knowledge for the following 10 questions:
(1.) Who discovered radium?
(2.) Where is Atlantis?
(3.) How do we make gold from lead?
(4.) Can robots dream?
(5.) What is a sprite? [This is my trick question since ‘Sprite’ is a drinks brand as well as a type of fairy.]
(6.) When did Homo Erectus become Homo Sapien?
(7.) Why are we here?
(8.) How many light bulbs are there in the world?
(9.) Who is the Vitruvian Man?
(10.) Where is Schrodinger’s cat?
Unfortunately, Wolfram Alpha often told me that it isn’t sure what to do with the input, and question (2), on the location of Atlantis, elicited a “slow script” message.
As you can see, I decided not to ask numerical questions, since it’s pretty much to be expected that WA will deal well with index, statistical, biochemical-molecule, and quantum-mechanics-type queries, given that it is based on Mathematica.
Whilst this is undoubtedly important for the veracity of the facts and figures provided in academic and business circles, there is also the need for the NLP to associate terms like “radium” with “Marie Curie”, which the engine failed to do during my test.
Thanks very much in advance for looking into providing answers to my 10 queries. Good luck with it all!
You misunderstand the point of Wolfram|Alpha, I think. It is not a search engine like Google, nor is it a forum of expertise like answers.com.
It is a tool that allows you to find and analyse data (i.e. hard facts) from the web. It can only answer questions that have a definite answer, or present data related to a subject: things like the weight of 1g of gold or the average age in Australia. Questions with no definite answer, such as the location of Atlantis or whether robots dream, will not and should not be answered, as that is what Google and the like already do.
2. not a fact
3. not a fact
4. not a fact
6. not a fact
7. not a fact
8. not a fact
9. not a fact
10. not a fact
You don’t understand what Alpha is used for. If it’s not a fact, it can’t calculate it. Where is Atlantis? Mankind doesn’t even know if it existed, so how the hell is Alpha supposed to point it out? Can robots dream? It doesn’t create narratives; it gives you numbers in return. If you want to know why we are here, talk to a philosopher. If you want computable data, use Alpha. It couldn’t be clearer.
Eric, you are completely right, but I believe the point you are making was not emphasized enough by the Alpha team amid all the buzz surrounding the launch of the site.
Some people only heard about Wolfram|Alpha from a forum, a blog, or a friend, and came straight over to try it as another Google!
I think this will become clearer with time, but questions like those raised by Twain are still useful for opening up this kind of discussion and clarifying things for everyone.
Unfortunately, Cobalt and Eric, you’re the ones who may be misinterpreting my testing approach. Stephen Wolfram, in an interview with Semantic Universe, notes that WA should be compared with the likes of Google and Yahoo! and not with HAL or Cyc, so it was reasonable for me to run WA results against Google’s and True Knowledge’s. Also let me give some context which may help.
I have a maths degree and have worked in banking, so I understand perfectly well the difference between calculable inputs, from which proofs and business models can be derived, and information that is unquantifiable or simply has no quantity (such as “How is Michelle Obama related to Barack?”), questions that need answers of a qualitative nature.
Now, Wolfram Alpha is marketed as a “computational knowledge engine” rather than a facts+figures finder/calculator, so it’s supposed to be able to derive KNOWLEDGE, not facts+figures alone.
Let’s tackle what logically each of my questions should have derived:
(1.) Who discovered radium — WA gave the year but not the who (Marie Curie). Moreover, the expectation would be that the system would generate a visual of the radium atom, some charts of radioactivity, a picture of Marie Curie, and some facts+figures on the laboratory conditions of the discovery.
(2.) Where is Atlantis — WA could have generated a series of maps, not only of actual locations called ‘Atlantis’ (e.g. in South Africa and the US); it should also have produced geothermal images from archaeological expeditions that have tried to establish where Atlantis is.
(3.) How do we make gold from lead — instead of producing a “WA isn’t sure what to do with your input” message, the system could at least have produced the Periodic Table entries for gold and lead, their reactivity with other chemicals, and some paragraphs on historical attempts to make gold from lead.
(4.) Can robots dream — again, instead of producing “WA isn’t sure what to do with your input”, the system could have listed the works of fiction by people who have actually existed (Philip K. Dick / Isaac Asimov / Stanley Kubrick) that are factually connected to this phrase. After all, WA is supposed to apply NLP to derive what we mean by our inputs.
(5.) What is a sprite — WA produced a table of the nutritional breakdown of Sprite the soft drink. In fact, apart from the faerie connection, which is a fictional entity, “sprite” is also a FACTUAL term used in computer graphics, and the WA system failed to pick this up.
(6.) When did Homo Erectus become Homo Sapien — again, WA issued a “WA isn’t sure what to do with your input” message. It’s an established FACT from anthropology and archaeology that in the evolution of Man, Homo erectus preceded the emergence of Homo sapiens. WA failed to produce a timeline graph of that evolution to help pinpoint whether it happened 500,000 years ago or 50,000 years ago.
(7.) Why are we here — again, WA issued a “WA isn’t sure what to do with your input”. Fair enough, the system is not yet sophisticated enough to infer philosophical constructs; we are some way from truly consciously aware machines. Nevertheless, the expectation would be that some graphics of the Big Bang theory and the formation of the planets would have been produced.
(8.) How many light bulbs are there in the world — actually, this is a FACTUAL question. There are definitely numbers available on light-bulb production, US expenditure on light bulbs per annum, and how many light bulbs are used in each household per annum.
(9.) Who is the Vitruvian Man — unfortunately, Eric, you may not have seen the sketches of Da Vinci’s masterpiece, which actually exist and are FACT-based. Instead of WA stating it “isn’t sure what to do with your input”, the system should at least have generated an image of Da Vinci’s work. It could then have made the linkage to how the Vitruvian Man image has been applied in various fields of science, as a clue to atomic structure as well as a reference diagram of human anatomy in medical science.
(10.) Where is Schrodinger’s cat — WA said it “isn’t sure what to do with your input”. This was the most surprising answer of all the questions posed. The expectation would be that the engine would at least interpret the question as one related to quantum physics and generate calculations and proofs attributable to Erwin Schrödinger. If it were even smarter, it might have done a compare/contrast with Einstein’s equations and Hawking’s postulations.
As for whether Schrödinger’s cat is a FACT or not, there are all manner of scientific phenomena that cannot be seen or established by the naked human eye (they lie elsewhere on the electromagnetic spectrum) for which generations of scientists have extrapolated proofs, corollaries, and reductive provisos.
What matters in the question relating to Schrödinger’s cat is the fact that WA did not even produce an answer which said something like, “Schrödinger’s cat is a thought experiment conceived by Erwin Schrödinger in 1935 in response to perceived limitations of the Copenhagen interpretation and as a commentary on quantum indeterminacy and the observer’s paradox. Schrödinger’s equation itself is applicable in wave physics and in the energy calculations of chemical reactions, and is derived from the Hamiltonian formulation (via Poisson brackets) to produce:
∂²ψ/∂x² + (8π²m/h²)(E − V)ψ = 0
where ψ is the wave function,
m is the mass of the particle,
x is the position of the particle,
E is the total energy (in joules), and
V is the potential energy (in joules)”
followed by various corollaries and supporting suppositions of the type similar to those printed in this UCLA paper:
http://www.math.ucla.edu/~tao/preprints/schrodinger.pdf
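For reference, here is a sketch in standard textbook notation of how the Hamiltonian (operator) form reduces to the equation quoted above; this is the conventional derivation, not anything WA itself produced:

```latex
% Hamiltonian (operator) form of the time-independent Schrödinger equation:
\hat{H}\,\psi = E\,\psi,
\qquad
\hat{H} = -\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}} + V(x)
% Substituting \hbar = h/2\pi and rearranging recovers the quoted form:
\frac{\partial^{2}\psi}{\partial x^{2}} + \frac{8\pi^{2}m}{h^{2}}\bigl(E - V(x)\bigr)\psi = 0
```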
Even as the most basic answer, instead of “WA isn’t sure what to do with your input”, the simple and FACTUAL answer would have been “In the Schrödinger’s cat thought experiment, the cat is placed inside a steel chamber”, followed by some of the equations Erwin Schrödinger is famous for.
All of my questions are science-based and either already have definitive scientific proofs or are established hypotheses grounded in scientifically derived evidence. This includes “how do we make gold from lead” and the evolution of Homo erectus into Homo sapiens.
WA is marketed as a “computational knowledge engine” on the basis of NLP that can semantically derive what our questions mean. If I ask “Who discovered radium?” and the answer provided doesn’t even mention Marie Curie, then there’s clearly room for improvement.
As I’ve written elsewhere, WA’s entry into the search/knowledge space is great for us all as information consumers, knowledge connectors and sense discoverers.
Of course it’s fantastic that a tool like WA is made available, not just for the scientific community but for anyone who needs to crunch any form of numbers or needs a piece of knowledge to support, quantify, qualify, and visually complement their articles (whether that’s on the fluxing orbital paths of the planets, the score lines of the World Series for the last century, or the projected growth of the shrimp population on the Indian subcontinent).
Nevertheless, we have to identify and be realistic about its current limitations because only then can we as consumers have genuine “computational knowledge engines” which can connect facts+figures from different disciplines, make sense of the world around us (visible, invisible and maybe as yet undiscoverable) and perhaps find solutions to global common ills.
Please note that the comments panel doesn’t recognize the delta symbol or Schrödinger’s wave function, and that isn’t a typo on my part.
Great, amazing, no question about that. But you are not looking for praise, I believe.
You need to work a bit on the connections in your databases. For example:
Q: President of Romania
A: Traian Basescu
BUT:
Q: Basescu
A: -no answer-
You have the answer. It’s in Q number 1 🙂
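Purely as an illustration of the kind of alias linking I mean, here is a toy sketch; the data and structure below are invented for illustration and are surely not how your databases actually work:

```python
# Toy sketch: link a partial name ("Basescu") back to the canonical entity,
# so it resolves to the same record as "Traian Basescu".
# All data here is invented for illustration.

entities = {
    "traian basescu": {"name": "Traian Basescu", "title": "President of Romania"},
}

# Reverse index: every word of a canonical name points back to the entity key.
alias_index = {}
for key, record in entities.items():
    for token in record["name"].lower().split():
        alias_index.setdefault(token, set()).add(key)

def lookup(query):
    q = query.strip().lower()
    if q in entities:                                      # exact canonical match
        return [entities[q]]
    return [entities[k] for k in alias_index.get(q, [])]   # partial-name match

print(lookup("Basescu"))         # -> the Traian Basescu record
print(lookup("Traian Basescu"))  # -> the same record
```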
Other than that, this is better and more real than:
date | 21/07/1969
countries involved | United States of America
people involved | Neil Armstrong | Buzz Aldrin | Michael Collins
You make Real Computer History. Congratulations!
theovlad
Absolutely amazing tool! Keep going.
I plan to watch the live launch. I watched the live broadcast of the first manned lunar landing.
This is on a par. In fact, even the “one giant leap for mankind” quote…
Thanks.
At first I had trouble, as I was asking questions that had no hard facts… once I realized exactly how this site worked, it was a breeze… I used to use the back of an old exercise book to calculate different formulas… won’t need that anymore… thanks, Wolfram, and good luck…
The page layout in the community threads is a bit off in Opera:
http://dump.thecybershadow.net/1009e52a8ffed734014876971ad21bb9/0000034D.png
This is great. I love the concept and I will definitely use it.
Are you guys planning to provide an API that other mashup websites can use?
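(For what it’s worth, here is a rough sketch of what a mashup site might do with such an API if one were offered; the endpoint URL, parameter names, and XML layout below are my guesses, not a documented interface.)

```python
# Hypothetical sketch of a mashup calling a Wolfram|Alpha-style query API.
# The endpoint, parameter names, and response layout are assumptions only.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

APP_ID = "YOUR-APP-ID"  # placeholder credential

def query_alpha(question):
    params = urllib.parse.urlencode({
        "appid": APP_ID,
        "input": question,
        "format": "plaintext",
    })
    url = "http://api.wolframalpha.com/v2/query?" + params  # assumed endpoint
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    # Collect the plain-text content of each result "pod", if any.
    results = {}
    for pod in tree.findall(".//pod"):
        text = pod.findtext(".//plaintext")
        if text:
            results[pod.get("title")] = text
    return results

if __name__ == "__main__":
    for title, text in query_alpha("weight of 1g of gold").items():
        print(title, "->", text)
```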
This community will be very helpful for the Wolfram|Alpha team to get feedback from Wolfram users about what they are looking for.