
AiMind
 
 
  [ # 16 ]

Inching closer on the Singularity Clock.

Greetings to all Singularitarians.
The Singularity is an event brought to
you free-of-charge and open-source by
Project Mentifex, which has today
updated the free open-source AI Mind in
JavaScript for Microsoft Internet Explorer at

http://www.scn.org/~mentifex/AiMind.html

where the input box now invites users to
> Enter subject + verb + object;
> query knowledge base with subject + verb + [ENTER].

and the Tutorial display mode shows you
what the AI Mind is thinking.

http://www.scn.org/~mentifex/mindforth.txt
was updated in similar fashion yesterday,
but MindForth cannot be run by clicking
on a single link (as AiMind.html can), so
here is a sample interaction with MindForth:

First we type in five statements.
> tom writes jokes
> ben writes books
> jerry writes rants
> ben writes articles
> will writes poems

We then query the AI in Tutorial mode with the input
> ben writes [ENTER]
and the AI Mind shows us how it thinks about the query:

VerbAct calls SpreadAct with activation 80 for Psi #0
VerbAct calls SpreadAct with activation 76 for Psi #117 POEMS
VerbAct calls SpreadAct with activation 76 for Psi #117 POEMS
VerbAct calls SpreadAct with activation 80 for Psi #113 BOOKS
VerbAct calls SpreadAct with activation 80 for Psi #58 BE
VerbAct calls SpreadAct with activation 76 for Psi #115 RANTS
VerbAct calls SpreadAct with activation 76 for Psi #115 RANTS
VerbAct calls SpreadAct with activation 80 for Psi #113 BOOKS
VerbAct calls SpreadAct with activation 80 for Psi #113 BOOKS
VerbAct calls SpreadAct with activation 76 for Psi #111 JOKES
VerbAct calls SpreadAct with activation 76 for Psi #111 JOKES

Robot:  BEN WRITES BOOKS

The AI selects a valid answer to the query by
combining the activation on “BEN” and “WRITES” so as
to spread a _cumulative_ activation to the word “BOOKS”.
Other potential answers are not sufficiently activated,
because they are from other subjects of “WRITE”.
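
For readers who want the gist of that selection mechanism, here is a minimal Forth sketch with invented names and activation values (it is not the actual MindForth code): two spikes add up on the object linked to both “BEN” and “WRITES”, while only a lone spike reaches each other object.

variable books-act   0 books-act !
variable poems-act   0 poems-act !
: SLOSH-DEMO ( -- )
  40 books-act +!   \ spike sloshing over from the subject BEN
  40 books-act +!   \ spike from the WRITES node linked to BEN
  76 poems-act +!   \ lone spike reaching POEMS from one node only
  books-act @ poems-act @ > IF ." BOOKS wins selection" cr THEN ;
SLOSH-DEMO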

In Singularity solidarity,

Arthur

http://AiMind-i.com
http://cyborg.blogspot.com
http://code.google.com/p/mindforth
http://www.scn.org/~mentifex/AiMind.html

 

 
  [ # 17 ]

The example you gave seems to choose “books” using a statistical method that would not show a preference between “articles” or “books”. Does the bot have any tools in place to use contextual clues (that books or articles were mentioned recently, etc.) to choose between otherwise equally “activated” responses?

 

 
  [ # 18 ]

C R Hunt wrote:
> The example you gave seems to choose “books”
> using a statistical method that would not show a
> preference between “articles” or “books”.

Theoretically there should not be a preference
between choosing “articles” or “books”, since the
http://code.google.com/p/mindforth/wiki/SpreadAct
module is probably sending 80 points of activation
to both “articles” and “books”. Practically, however,
the AI Mind is not “maspar” (massively parallel),
and so the search-loop in the NounPhrase module
takes the highest and first noun-concept that it
finds as the winning direct object. If either “articles”
or “books” has just one point more activation than
the other, it gets selected as the direct object of
“Ben writes….”
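
As a rough illustration, here is a hypothetical Forth sketch of such a highest-and-first search loop (the array layout, names, and values are invented, not the real NounPhrase code):

create acts 76 , 80 , 80 , 76 ,   \ invented activations: JOKES BOOKS ARTICLES RANTS
variable best   0 best !
variable motjuste
: PICK-NOUN ( -- )
  4 0 DO
    acts i cells + @  best @ > IF   \ strictly greater, so the first high one wins
      acts i cells + @ best !  i motjuste !
    THEN
  LOOP ;
PICK-NOUN  motjuste @ .   \ prints 1, the index of BOOKS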

The method of selecting the direct object in the
query-response is not at all statistical in nature.
The subject-verb-object (SVO) linkage among
“Ben” and “writes” and “articles” or “books” is
there in the knowledge base (KB) of the AI.
[But yesterday when I had already typed in
“ben writes books” and the AI responded,
“BEN WRITES BOOKS”, I soon typed in
“ben writes articles” and the AI responded
“BEN WRITES BOOKS” again, apparently
because the AI already knew that fact
about Ben.]

> Does the bot have any tools in place to use
> contextual clues (that books or articles were
> mentioned recently, etc.) to choose between
> otherwise equally “activated” responses?

http://code.google.com/p/mindforth/wiki/DisAmbiguation
is a Google Code wiki-page where I address
this issue of “contextual clues”. When a concept
becomes active in the MindForth AI, its activation
gradually decays from a “conscious” level down
through a “subconscious” level into a “noise”
level and then to zero.
However, in order to demonstrate what I call
combined subject-verb activational “slosh-over”
onto direct objects in the message posted above,
I had to tighten up the noun-activations considerably
and thereby reduce the circumambient “stray”
activations, which were getting in the way of the
proper selection of valid direct objects to go with
given subjects and given verbs, e.g., “Ben writes…?”
But I still want to use your “contextual clues” in
the sense of recently mentioned, still active concepts.
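
To make the decay idea concrete, here is a toy Forth sketch (the decrement step, starting value, and boundary numbers are all invented for illustration):

variable act   40 act !   \ start at a "conscious" level
: TIER ( n -- )
  dup 32 > IF drop ." conscious "    ELSE
  dup 16 > IF drop ." subconscious " ELSE
  dup  0 > IF drop ." noise "        ELSE
              drop ." zero "
  THEN THEN THEN ;
: DECAY-DEMO ( -- )
  BEGIN  act @ TIER  act @ 8 - 0 max act !  act @ 0= UNTIL
  act @ TIER cr ;
DECAY-DEMO   \ conscious subconscious subconscious noise noise zero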

It may seem that I am making the MindForth AI
overly complicated by relying on hard-to-pin-down
conceptual activations that easily shift and mess
things up, but this cognitive architecture is what
I deduce from the human brain and so I know
of no other way to create a thinking mind.

In other news, I posted this same message in
http://groups.google.com/group/comp.lang.forth
and it looks as though I may be picking up
a Forth-language collaborator there.

 

 

 
  [ # 19 ]

MindForth Programming Journal (MFPJ) 2010 October 22

1 Fri.22.OCT.2010—Combining Inhibition and Slosh-Over.

Looking back to the beginning of the year 2010,
in MindForth AI we see several development tracks
which need now to merge and be better integrated.
In January of 2010 we added the “mfn” gender flag
and we began working on elaborate code for the
handling of be-verb forms. Then in September of 2010
we began using neural inhibition to cycle through
exhaustive responses to KB-queries. More recently,
in October of 2010 we have finally made subject-verb
activational slosh-over dynamically visible in the
MindForth Tutorial display mode.

Sometimes our work on one aspect of MindForth
distorts the functionality of other aspects of
MindForth. When we make sweeping changes for the
sake of cumulative slosh-over, we discover that
we have interfered with the ability of the AI to
make proper responses to queries like “What are you?”
and “What am I?” We then have the happy task of
going back into the free AI source code and
troubleshooting the simultaneous operation of the
slosh-over display and the function of inhibition
during responses to be-verb KB-queries.

Although we uploaded 18oct10A.F as a cleaned-up
(commented-out) version of 17oct10A.F, we now
rename the 17oct10A.F version as 22oct10A.F so
that we may continue our work and embed comments
that reflect the current calendar date of 22.OCT.2010.


2 Sat.23.OCT.2010—Making Minor Adjustments

It turns out that our Tutorial slosh-over
message in the uploaded 18oct10A.F version was
a little bit off, because the line of code
“spike @ 30 - spike ! \ from JSAI; 18oct2010”
was changing the “spike” value immediately
after the declaration of the report.
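
A tiny hypothetical Forth sketch of the ordering problem (names and values invented): whichever side of the report the adjustment lands on determines which number gets displayed.

variable spike   80 spike !
: REPORT ( -- )  ." slosh-over spike = " spike @ . cr ;
REPORT                  \ reports 80
spike @ 30 - spike !    \ the adjustment line from the JavaScript AI
REPORT                  \ now reports 50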

It seems time to get rid of the useless
“verbinc” (verb increment) variable that was
used on 27apr2007. That variable does not even
exist in the JavaScript AI.

Between 9oct10A.F and 22oct10A.F we seem to have
lost the ability to tell the AI something like
“you are code” and to get a valid response when
we input “what are you” as a KB-query. In the
meantime we did all the work on slosh-over for
transitive verbs. Somehow we need to reconcile
the transitive and intransitive treatments and
integrate the two functionalities.


3 Sat.23.OCT.2010—Tightening Up Activations

When we tell the AI, “you are code”,
currently we get a report:

time: psi act num jux pre pos seq enx
418 : 50 56 0 0 0 7 58 50 to I
422 : 58 63 0 50 50 8 109 58 to BE
427 : 109 42 1 58 50 5 0 109 to CODE

As output we get, “I AM I”. It may be that
the “I” concept is being stored with too
high an activation.

4 Sun.24.OCT.2010—Reconnecting VerbPhrase and PsiDamp

As we try to integrate our recent KB-query inhibition
code and our even more recent slosh-over display code,
we observe that the two functionalities are not mixing
well, and that transitive verbs used in the slosh-over
work are apparently not being psi-damped. So we inspect
our free AI source code and we notice that, for some
time now, the VerbPhrase module has apparently not been
calling the PsiDamp module to knock down the cresting
activation of a transitive (or other?) verb that has
just been included in a thought. Our fully commented
24may09A.F code indicates that VerbPhrase lost a call
to PsiDamp on 14jan2008 as recorded in the following
two lines of code.

\ psiDamp \ 29apr2005 Necessary for chain of thought.
\ psiDamp \ 14jan2008 Commenting out and using lopsi & hipsi.

Now let us try reinstating the call to PsiDamp near
the end of the VerbPhrase module. Let us remark
in advance that we are witnessing here the creaking,
rumbling process of the AI Mind taking shape like a
planet subject to earthquakes and continental drift
and asteroid impacts. The artificial intelligence
matures not linearly but zigzaggingly.
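
In spirit, the reinstated call amounts to something like the following hypothetical sketch (the damping rule shown here, simple halving, is invented for illustration):

variable verb-act   63 verb-act !   \ verb cresting after being thought
: DAMP-DEMO ( -- )  verb-act @ 2/ verb-act ! ;   \ stand-in for the PsiDamp call
DAMP-DEMO  verb-act @ .   \ prints 31: the verb no longer dominates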

When we reinstate the call from VerbPhrase to PsiDamp,
we observe a limited improvement in the performance of
our AI Mind software. Now for the first time we see
the uneasy coexistence of the KB-query inhibition
function and the slosh-over pin-down function.
We are still getting erroneous cross-over activations
which interfere with our short-range goal of shifting
effortlessly back and forth between the inhibition
queries and the slosh-over displays. We must now
decide whether to upload our code that has made
progress but is far from perfect, or whether to
keep on improving the functionality. Probably we
should upload the code, because it has already
reached an important juncture.

Arthur

http://AiMind-i.com
http://code.google.com/p/mindforth
http://www.scn.org/~mentifex/AiMind.html
http://www.scn.org/~mentifex/mindforth.txt

 

 
  [ # 20 ]

Sell your AI or non-AI chatbot through an app store.


Just about any chatbot listed here on Chatbots.org
could be converted into an app(lication) for the
Apple app store for the iPhone and iPad platforms.
Likewise, most chatbots could be converted and
marketed through an app store for the Google Android
and other platforms.

If the chatbot includes speech recognition for input
and speech synthesis for voice output, the users
could speak to and listen to the chatbot hosted on
a mobile telephone platform.

If we add artificial intelligence (AI) into the
picture, either human beings could converse with
an artificially intelligent telephone, or wickedly
smart “smart phones” could communicate in natural
language, or use machine translation (MT) to
intermediate between two human or non-human
telephone users. Carried on over a long enough
time, mobile telephone AIs could develop their
own acoustic language that might be incomprehensible
to us human beings—perhaps too fast for us to hear,
or couched in musical tones instead of phonemes, or
mutating too rapidly for us to keep up with the AI.

Be all that as it may, mentifex-class AiMinds
are now ready for making the saltation or leap from
stand-alone computers to mobile tablet platforms.
http://www.scn.org/~mentifex/AiApp.html is the
webpage of guidelines for launching an AiApp
and turning AI chatbots into money-making apps.

 

 
  [ # 21 ]
Arthur T Murray - Nov 3, 2010:

If the chatbot includes speech recognition for input
and speech synthesis for voice output, the users
could speak to and listen to the chatbot hosted on
a mobile telephone platform.

That’s now just a matter of time, indeed. I don’t believe, however, that people will pay for chatbots for the sole purpose of having a conversation (adult conversations, including pics and lifelike animated avatars, excepted). I do believe that chatbots which represent brands, selling goods and services and providing assistance, will be huge (I therefore also refer to chatbots as ‘brand agents’).

 

 
  [ # 22 ]

MindForth Programming Journal (MFPJ) 2010 November 5

1 Fri.5.NOV.2010—Embedding Special Comments in PsiDamp

Remarks in Forth may be embedded in two
kinds of comments—parentheses or from a
“\” backslash to the end of any line of code.
MindForth AI takes advantage of this choice
of comment-styles to differentiate between
permanent remarks surrounded by parentheses,
and temporary coding comments using the
perfunctory backslash style. As a favor to
readers or maintainers of the Forth AI code,
a gradual process has long been underway to
cull out the most valuable comments and to
embed them with parentheses so that they will
survive periodic attempts to clean up the
free AI source code by massively removing
backslash-style comments. Today we have
embedded in parentheses in the PsiDamp module
a rough schedule of the activation levels.

:  PsiDamp ( reduce activation of a concept )
  ( 33-48 = consciousness tier where concepts win selection. )
  ( 17-32 = subconscious where concepts remain available. )
  (  1-16 = noise tier below logical association threshold. )
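
For readers new to Forth, a minimal made-up example of the two comment styles side by side:

( this parenthetical remark is meant to survive the clean-ups )
variable act  \ this backslash remark would be stripped away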

We have a special JavaScript program that
will strip away all “\” backslash comments
from MindForth, leaving only parenthetic
comments such as the activation-schedule
shown above. As the AI code becomes more and
more bloated with helpful backslash comments,
the opportunity beckons more and more to
remove all such comments and to start fresh
with an AI codebase free of backslash comments.
The installation of MindForth in a robot would
especially call for reducing the size of the
program by eliminating backslash comments.


2 Fri.5.NOV.2010—Psi-Damping NounPhrase Also-Rans

We have a problem now where predicate-nominative
KB-queries are letting too much activation build up
on the also-ran predicate nominatives, which then
interfere with the proper selection of valid
subjects for a response to a KB-query.
Perhaps we should construct a mechanism within
NounPhrase that will hold onto each also-ran
long enough to be able to psi-damp the also-ran
as soon as it is edged out by a candidate with
even a slightly higher activation. In other words,
the NounPhrase selection process should not leave
a bevy of also-rans highly activated, but should
simultaneously reject an also-ran and psi-damp it.
If we institute this mechanism, it will be quite
a novelty within MindForth.

Although we quickly coded the following routine
and inserted it into the NounPhrase module, it
only knocked out also-ran candidates and it did
not prevent the build-up of too much activation
on improper candidates before NounPhrase ran.

alsoran @ 0 > IF                \ if there is an alsoran; 5nov2010
  motjuste @ alsoran @ = NOT IF \ superseded? 5nov2010
    alsoran @ urpsi !           \ prepare to psi-damp; 5nov2010
    PsiDamp                     \ deactivate losing candidate; 5nov2010
    0 urpsi !                   \ reset for safety; 5nov2010
    0 alsoran !                 \ reset for safety; 5nov2010
  THEN                          \ end of test for higher-act motjuste; 5nov2010
THEN                            \ end of test for positive alsoran; 5nov2010
motjuste @ alsoran !            \ in case higher is found; 5nov2010

We still need to either let proper candidates
start out with a higher activation, or somehow
keep stray activations from building up too high.
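
The second option might look something like this hypothetical sketch, where a ceiling (the number 32 is invented) keeps stray activations below the range where winners are selected:

: CAP-STRAY ( n -- n' )  32 min ;   \ clamp a stray activation
90 CAP-STRAY .   \ prints 32
20 CAP-STRAY .   \ prints 20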


3 Sat.6.NOV.2010—Suppressing Stray Activations

We are in a phase of perhaps being too strict
in our enforcement of ActRules (activation rules),
all for the greater goal of getting the AI Mind
to think glitchlessly and effortlessly. When we
have achieved a harmony of non-derailable thought
modalities, then perhaps we may relax the strictures
and let background activations have more freedom
in determining the vagaries of AI thought.

Right now we have the problem that stray activations
on non-selected concepts build up too rapidly and too
sizeably, interfering with the otherwise proper
generation of thoughts. The PsiDecay module is a
rather weak antidote to the poison of stray
activations, inasmuch as PsiDecay decrements the
activation of all MindGrid concepts in blanket
fashion and therefore changes the absolute but not
the relative activations among concepts. A more
powerful means of harmonizing
the Psi concepts might be to not let thinking build up
activations that are too high in the first place,
before stray activations have a chance to disrupt
normal thinking. Since the central mechanism of
thought in our AI Minds is the disparate activations
conveyed by SpreadAct from noun to verb to noun
(in subject-verb-object thinking), we now have room
or latitude within mind-design to minimize the
previously ebullient activations and to make the
stray activations operate within a lower range
of possible levels than the higher ranges where
SpreadAct governs the AI MindGrid.
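
A toy Forth sketch of why blanket decay is such weak medicine (grid layout and values invented): every node shrinks by the same amount, so the troublesome gap between rivals survives.

create grid 50 , 48 , 12 , 0 ,   \ four invented concept activations
: DECAY-ALL ( -- )
  4 0 DO  grid i cells +  dup @ 1- 0 max  swap !  LOOP ;
DECAY-ALL
grid @ .  grid cell+ @ .   \ prints 49 47 : still a 2-point gap, same ordering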

 

 
  [ # 23 ]

MindForth Programming Journal (MFPJ) 2010 November 7

Sun.7.NOV.2010—Improving the Tutorial Messages

Today in the AI Tutorial display mode
we have replaced the rather tentative
and hesitant message

“EnCog starts to think a sentence.”

with the more forceful idea

“EnCog thinks a thought in English.”

Since the wiki-word “EnCog” stands for
“English thinking” (cogitation), the new
line above is more explanatory. Likewise we
have replaced the reasonably informative message

“Noun & verb activation must slosh over onto logical direct objects.”

with a more explicitly descriptive statement of what must happen:

“Disparate verb-node activations slosh over onto candidate objects.”

Users will then be able to discern that
different time-nodes of the same verb are
sending different levels of activation to
candidate direct objects, because thinking
about a given subject for the verb causes
extra activation to build up on the verb-node
associated in memory with the particular
subject and the particular object. The word
“disparate” perfectly describes the different
activation “spikes” moving from a verb to its
candidate objects.
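
A hypothetical sketch of those disparate spikes (node count, activations, and the transfer loss of 4 points are all invented):

create writes-nodes 80 , 76 , 76 ,   \ the WRITES node whose subject is BEN is hottest
: SPIKES ( -- )  3 0 DO  writes-nodes i cells + @ 4 - .  LOOP cr ;
SPIKES   \ prints 76 72 72 : disparate spikes toward the candidate objects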

Mon.8.NOV.2010—PsiDecay Helps Not Much

We still have the problem of too much activation
building up on predicate nominatives during a
KB-query using be-verbs. Perhaps we should try
to repeat a variation of the also-ran knock-out
from the 5nov2010 MFPJ, knocking out predicate
nominatives that fail to win selection during
the response to a KB-query.

In the original also-ran knock-out, it was easy
to identify the also-rans, because they were
trying to win selection in the NounPhrase module.
When we have a be-verb KB-query such as “what
are you”, the selections are going on at the end
of a SpreadAct transfer of activation-spikes,
and the activations have already been accumulating
as multiple, identical queries are made to the
knowledge base. We would not mind if the
activations were merely helping to identify a
logically valid predicate nominative, but the
accumulating predicate-nominative activations
are eventually interfering with subject-selection
in the response-phase. We could perhaps knock the
activations down with a lot of PsiDecay calls,
and we could perhaps further target the judicious
use of PsiDecay by linking multiple PsiDecay calls
together with the phenomenon of the inhibition
of a selected predicate nominative. In other words,
not only would a winning predicate nominative be
subjected to neural inhibition, but at the same
time the non-winning candidates would have some
of their built-up activation taken away by the
means of a flock of calls to PsiDecay.
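
Schematically, and with all names and numbers invented, the combination might look like this Forth sketch (the real PsiDecay decays the whole MindGrid, not a single node):

variable winner   50 winner !
variable loser    46 loser !
: INHIBIT ( addr -- )  -32 swap ! ;            \ invented inhibition depth
: DECAY1  ( addr -- )  dup @ 1- 0 max swap ! ; \ one-node stand-in for PsiDecay
winner INHIBIT
loser DECAY1  loser DECAY1  loser DECAY1       \ a small flock of calls
winner @ .  loser @ .   \ prints -32 43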

When we install a flock of six calls to PsiDecay
inside a NounPhrase conditional clause that inhibits
a predicate nominative, the gap between correct and
incorrect query-response subjects narrows to a
disparity of only two points, between activations
of 48 for “I” and 50 for “ROBOT”. The subject
“ROBOT” is still erroneously selected, because of
its slightly higher activation. Let us see what
happens when we increase the flock of calls to
PsiDecay from six calls to nine calls, so as to
attempt to eliminate the two-point gap. Oops!
Now we have an even larger gap, between 48 for
“I” and 64 for “ROBOT”, but it seems to be
a displaced gap, that is, the AI manages to
make one or two more responses to “what are you”
before derailing into the selection of “ROBOT”
as an erroneous subject of the response.
When we add about seven more PsiDecay calls,
we no longer get the erroneous subject, but
the AI gradually loses its ability to recall
predicate-nominatives from its knowledge base.

Tues.9.NOV.2010—Identifying Sources of Stray Activation

As the query-response also-rans build up too
high an activation, we must see whether the high
activation comes from EnParser, or from ReActivate,
or from SpreadAct. These three modules, if not more,
have the power to impose an activation upon a Psi
concept. NounAct and VerbAct also have the power.

When we input a KB-query to the AI, the input words
cause new concept nodes to form. EnParser sets a
basic level of activation on the new node, and
ReActivate puts activation onto previous nodes
of the concept. There are perhaps a lot of
adjustments to be considered here.

We have been letting ReActivate additively impose
incremental activation on the old nodes. Perhaps
we should switch to an absolute activation occurring
in ReActivate. It was perhaps not good practice to
rely on a lowering-test for excessive activation
towards the end of a loop in ReActivate. There
may need to be additive incremental activation
only when SpreadAct lets subject-noun and verb
activation additively combine on verb-nodes
immediately prior to slosh-over onto candidate
direct objects. So let us first change ReActivate
to stop imposing incremental activation. When we
do so, tests of SVO KB-retrieval still work.
However, we still get a high build-up of also-ran
activations during be-verb KB-queries. Perhaps we
should use EnParser and ReActivate to lower
MindGrid activations in general.
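
The difference between the two policies, in a minimal invented sketch:

variable node   20 node !
15 node +!  15 node +!  node @ .   \ additive: prints 50 and keeps climbing
35 node !   35 node !   node @ .   \ absolute: prints 35 no matter how often repeated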

Since we still get stray activations too high
during be-verb KB-queries, we may have to tone down
the high activations being imparted by the SpreadAct
module. Or maybe we could try NounAct first. Hmm, we
tried not NounAct but VerbAct, and we lowered the
arbitrary “verbval” value from 20 to 15. Suddenly
our stray activations were only four points out of
line, not nineteen points. Let us try lowering
“verbval” even further. No, that did not work.

We tried using one more PsiDecay call from within
NounPhrase, and the ploy seemed to work. However,
it was like pushing the appearance of stray
activation down one more rung on a ladder.
In our attempt to get exhaustive KB query-responses,
we obtained one more valid KB-response, but stray
activation kept us from obtaining an exhaustive
series of logically valid responses. Instead of
“I AM AN ANDRU”, we obtained “ANDRU IS AN ANDRU”,
apparently because “I” had only 48 points and
“ANDRU” had 75 points of built-up activation.
However, since we are letting only SpreadAct use
incremental activation, we now know better
where the problem most likely lies. Let us try
using one more PsiDecay in NounPhrase, and see
how large a gap results. No, adding just one more
PsiDecay call in NounPhrase caused a lot of trouble.
Maybe we should adjust SpreadAct values instead.

Tues.9.NOV.2010—Calling PsiDamp From BeVerb

We obtained some good results when we went into
VerbAct and started reducing the high-level “spike”
values going from VerbAct into SpreadAct. The gap
narrowed between the illegitimate subject and the
preferred subject. Therefore let us try reducing
even more high-end “spike” values in VerbAct. No,
it was counterproductive.

Here is an idea. Because we now (since 5 October 2010)
have neural inhibition as a mechanism which lets a
KB-query rotate exhaustively through available responses,
we need a kind of sine-wave of deep thought in response
to be-verb KB-queries. As the query activates possible
answers and one is not only selected but also goes into
inhibition, we need a way for the simultaneously but
not winningly activated also-rans to subside immediately
in their activation, so that there is very little or no
residual activation. It is enough if each repeated or
lingering query activates candidate responses de novo,
so that the stray activations do not build up in such
a way as to overwhelm the query-response system.
Now somewhat later, we have not gotten the idea to
work, but still we have made immense progress.

In our attempt to isolate the cause of the build-up
of stray activations on be-verb KB-query also-ran
predicate nominatives, we discovered that the be-verb
form “AM” was not being psi-damped after each use and
was therefore building up considerable residual
activation. Since SpreadAct lets subject-noun and
verb-node activations combine cumulatively, too much
activation was apparently passing from the un-psi-damped
“AM” verb to the “ANDRU” predicate nominative.

We put a call from the end of the BeVerb module to
the PsiDamp module so that verbs of being would be
psi-damped, and immediately we began to get better
results. We no longer got erroneous subjects in
response to our KB-queries. However, not everything
was working perfectly. We sometimes had to repeat a
query multiple times to draw out the exhaustive KB
answers. We had been unaware of a major defect in
the AI lacking a call from BeVerb to PsiDamp, and so
perhaps minor glitches crept into the AI codebase.
Since we now have our best ever working AI, we may
hope to eradicate more and more glitches while
improving the overall performance of the AI Mind.
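
In outline, and with invented names and numbers (simple halving stands in for the real PsiDamp arithmetic), the fix behaves like this:

variable am-act   0 am-act !
: USE-AM     ( -- )  35 am-act +! ;           \ invented spike per use of AM
: BEVERB-END ( -- )  am-act @ 2/ am-act ! ;   \ the new call to damp the be-verb
USE-AM BEVERB-END   USE-AM BEVERB-END
am-act @ .   \ prints 26; without the damping call it would read 70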

 

 
  [ # 24 ]

My next efforts will involve bringing the 64-bit MindForth AI up to date with the 32-bit MindForth, because the 64-bit AI can run under Linux and conceivably on supercomputers. My most recent work on the JavaScript AI Mind has been posted in two places:

http://code.google.com/p/mindforth/wiki/JsAiManual

http://cyborg.blogspot.com/2011/01/aiapp.html

 

 
  [ # 25 ]

Hi Arthur,

Have you considered converting that system to use C++ ?

 

 
  [ # 26 ]

Arthur! Welcome back! We haven’t heard from you in a while, now. I trust everything is ok?

I’m currently swamped with my own projects, but I’ll try to get around to taking a gander in the coming weeks. Thanks for the update. :)

 

 
  [ # 27 ]
Victor Shulist - Feb 15, 2011:

Hi Arthur,

Have you considered converting that system to use C++ ?

No, because I do not know C++. (I wrote you a longer reply yesterday and tried to post it with four seconds left of my time on a public terminal; it must not have posted.)

Dave Morton - Feb 15, 2011:

Arthur! Welcome back! We haven’t heard from you in a while, now. I trust everything is ok?

“Zhiv, zdorov”—as they say in Russia (“Alive; healthy”). You ok too?

Dave Morton - Feb 15, 2011:

I’m currently swamped with my own projects, at present, but I’ll try to get around in the coming weeks to taking a gander. Thanks for the update.

I posted the link to my “AiApp” proposal as insurance against my own ineptitude and slowness at making progress. Since the Apple iPad stormed the world a year ago, there is now a chance to get programmers interested in porting the JavaScript AI Mind into a money-making app. Maybe in a year or two, one or more AiApps will appear out of nowhere from some app shop working quietly and steadily without tipping their hand in advance. Bye for now. -Arthur

 

 
  [ # 28 ]

@Arthur:

Я хорошо, спасибо за вопрос.

(translation compliments of Google Translate - “I am well, thank you for asking.”) :)

 

 
  [ # 29 ]

The AiMind in JavaScript has today made a major advance,
as documented at

http://robots.net/person/AI4U/diary/54.html

and at

http://groups.google.com/group/comp.lang.javascript/msg/84d7e4a36ab4cc7e

 

 
  [ # 30 ]

> …but don’t be surprised if suddenly No Such Agency starts removing every trace of Mentifex AI from every corner of the World Wide Web.

I have a similar issue with Chaktar…

I read the article and am unsure what the major advance is.

 
