I thought it would be nice to add some real content to the site that a few non-techies could enjoy. Christina’s recipes should be a good start. I don’t know where she gets these or how she comes up with them, and I don’t care; as long as she cooks them, I am happy. So, enjoy part 1 of the Christina’s Food series.
- 1 Tbsp butter, melted
- 1 Tbsp brown sugar, packed
- 1/2 tsp sage
- 1 tsp cinnamon
- 1/4 c chopped pecans
- salt to taste
- 2 salmon fillets
- Preheat oven to 425°F
- Mix melted butter, brown sugar, sage, and cinnamon
- Add pecans to butter mixture
- Season salmon fillets with salt on both sides
- Place salmon on baking sheet
- Coat top of salmon with butter-pecan mix
- Bake in oven for 15-18 minutes or until salmon flakes easily when tested with fork
Best served with cornbread and seasoned carrots or another veggie.
Having implemented a few artificial intelligence systems recently, I can be pretty sure that the main challenge in designing intelligence is defining it. In more confined problem spaces such as gaming, graph fitting, and image recognition, we tend to hardcode the "learning" logic. I have always tried to understand how we choose which logic we hardcode and which logic we allow the AI to learn, and the answer seems to be very problem specific. Of course, the ultimate goal would be to create an AI with no hardcoded logic that could learn and reason as well as or better than a human.
Artificial neural networks seem like a good start. By imitating the way the human brain functions, we should be able to create an AI with similar capacity. Note, however, the limitations of this approach. First, we don't know the structure of the human brain. Sure, we understand neurons to a degree, but it has been shown that parts of the brain are hardwired (in a way we can't easily document) for certain tasks such as language, movement, and reasoning. Is it possible to build a comparable neural network, then? I would argue yes, given that we could use evolutionary selection over genetic encodings of neural network topologies. The problem is training each topology "template" and seeing if it is even capable of learning. Keep in mind that it takes a human years to fully grasp language. Also, the structure of the brain has had billions of lifetime iterations over countless years to get the wiring right.
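To make the idea concrete, here is a minimal sketch of evolutionary selection over network topologies. Everything here is my own illustration (not a real neuroevolution library): the task is XOR, "training" is crudely approximated by sampling random weights and keeping the best draw, and genomes are just lists of layer sizes.

```python
import math
import random

random.seed(42)

# Toy task: XOR, which a single neuron cannot represent.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def make_weights(sizes):
    """Random weights for a feedforward net; each row's last entry is a bias."""
    return [[[random.uniform(-2, 2) for _ in range(sizes[i] + 1)]
             for _ in range(sizes[i + 1])]
            for i in range(len(sizes) - 1)]

def forward(weights, x):
    a = list(x)
    for layer in weights:
        # zip stops at the inputs, so row[-1] is left over as the bias term.
        a = [math.tanh(sum(w * v for w, v in zip(row, a)) + row[-1])
             for row in layer]
    return a[0]

def fitness(sizes, trials=150):
    """Crude stand-in for training: the best of many random weight draws."""
    best = float("-inf")
    for _ in range(trials):
        w = make_weights(sizes)
        err = sum((forward(w, x) - y) ** 2 for x, y in XOR)
        best = max(best, -err)
    return best

def mutate(sizes):
    """Add, drop, or resize a hidden layer; inputs and outputs stay fixed."""
    s = sizes[:]
    r = random.random()
    if r < 0.34 and len(s) < 5:
        s.insert(len(s) - 1, random.randint(1, 4))
    elif r < 0.67 and len(s) > 2:
        del s[random.randrange(1, len(s) - 1)]
    elif len(s) > 2:
        i = random.randrange(1, len(s) - 1)
        s[i] = max(1, s[i] + random.choice([-1, 1]))
    return s

def evolve(generations=8):
    pop = [[2, 1]] * 6  # start from the simplest topology: inputs -> output
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:2]
        pop = parents + [mutate(random.choice(parents)) for _ in range(4)]
    return max(pop, key=fitness)
```

The real cost lives exactly where the paragraph says: each topology "template" has to be trained (here, sampled) before we know whether it can learn at all, and that inner loop dominates everything else.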
Genetic programming is another attempt worth mentioning. Find a simple Turing-complete language and have the AI create and mutate a program until it can solve the training set. Once the AI can solve one training set completely, add slightly harder problems to the training set and have it mutate the program until it has learned that set as well. The problem comes with the number of iterations required to learn a complex task. Also note that generating the training sets is not trivial. As with ANNs, there simply isn't enough time.
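That mutate-until-solved loop can be sketched in a few lines. This is my own toy illustration, not a real GP system: instead of a Turing-complete language it uses arithmetic expression trees, and instead of a full population it hill-climbs a single program, growing the training set once the easy cases are handled.

```python
import operator
import random

random.seed(7)

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def random_expr(depth=3):
    """A random expression tree over x, small constants, and +, -, *."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(0, 3)])
    return [random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1)]

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    return OPS[op](evaluate(a, x), evaluate(b, x))

def mutate(expr):
    """Replace one random subtree, keeping the rest of the program intact."""
    if not isinstance(expr, list) or random.random() < 0.3:
        return random_expr(2)
    op, a, b = expr
    if random.random() < 0.5:
        return [op, mutate(a), b]
    return [op, a, mutate(b)]

def error(expr, cases):
    return sum(abs(evaluate(expr, x) - y) for x, y in cases)

def solve(cases, start=None, budget=3000):
    """Hill-climb: keep a mutation whenever it does no worse on the set."""
    best = start if start is not None else random_expr()
    for _ in range(budget):
        cand = mutate(best)
        if error(cand, cases) <= error(best, cases):
            best = cand
    return best

# Curriculum: learn a few points of a hidden target, then extend the set.
target = lambda x: x * x + 1
cases = [(x, target(x)) for x in range(3)]
prog = solve(cases)
cases += [(x, target(x)) for x in range(3, 6)]   # slightly harder problems
prog = solve(cases, start=prog)                  # keep mutating the same program
```

Even on this tiny language, most of the budget is burned on rejected mutations, which is the iteration-count problem in miniature; a Turing-complete language only makes the search space worse.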
Is there hope? Yes, and I believe we will see natural progression of AIs in problem-specific fields. Eventually I see these problem-specific AIs acting as the trainers for a generic AI. The trick will be designing an AI that can learn fast enough that the results merit the computational resources and time invested in the training process. Imagine creating an AI that learns to play Halo by playing a series of games against the game's AI, eventually becoming able to beat the game's AIs, and then learning further by playing humans. If the AI can learn to beat the game's AIs in roughly the same "game time" as a human, then our goal has been reached.