Kaplan Scores Suck

Ask your questions about Kaplan products/services here
This topic has expert replies
Master | Next Rank: 500 Posts
Posts: 344
Joined: Sun Sep 28, 2008 11:00 am
Location: USA
Thanked: 6 times
Followed by:1 members

Kaplan Scores Suck

by Bidisha800 » Tue Oct 28, 2008 3:25 pm
"Our practice tests are generally designed to be as accurate as possible, so you'd likely expect to score within a similar range on test day if you took the exam now."

I could not disagree more. Every GMAT test taker knows that Kaplan scores don't reflect actual performance. I think Kaplan's scoring algorithm is completely broken.

Is any improvement on the way? Probably not.
I think test takers should stop buying Kaplan books until Kaplan addresses this issue.
Drill, baby, drill!

GMATPowerPrep Test1= 740
GMATPowerPrep Test2= 760
Kaplan Diagnostic Test= 700
Kaplan Test1=600
Kaplan Test2=670
Kaplan Test3=570
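For anyone who wants to quantify the gap, here is a quick calculation from the scores listed above (the variable names are mine):

```python
# Scores exactly as listed above.
gmatprep = [740, 760]
kaplan = [700, 600, 670, 570]  # diagnostic + tests 1-3

def avg(scores):
    return sum(scores) / len(scores)

gap = avg(gmatprep) - avg(kaplan)
print(gap)  # 115.0: Kaplan runs about 115 points lower on average
```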

Junior | Next Rank: 30 Posts
Posts: 26
Joined: Fri Mar 28, 2008 12:09 pm
Thanked: 3 times
Followed by:3 members

by vidabella417 » Tue Nov 04, 2008 7:10 am
Dear Bidisha800,

The online CATs used in the courses are continually assessed and updated to ensure validity and accuracy, which means that practice test scores from classroom CATs are a very good indicator of your actual score level. The correlations between highest Kaplan CAT score and the actual GMAT score are extremely strong, so we are therefore confident that we have the most realistic practice tests available, based on the validity of the exams, as well as the fact that we spend more time and money on research and development to ensure strong prep materials than any other company. (We also see the strongest overall results---a Harris poll recently verified that more people get into business school with a Kaplan course than any other GMAT course.)

Kaplan has made this an equally high priority for our self-study retail guides, and we have, in fact, taken decisive measures to address the scoring issue. First, the scoring in the 5 CATs now available in GMAT 2009: Premier Program's Online Companion has been updated to match the classroom CATs. And for our next edition of Kaplan's GMAT Premier, we will continue to provide high-quality CATs with the corrected scoring, along with some exciting, revolutionary new online features!

We listen very closely to customer feedback, and it is a critical factor taken into consideration as we make constant improvements to our products. It's how we are able to provide the most personalized and effective prep on the market today. Your satisfaction with our products is very important to us!

Sincerely,

Monica P. Lugo
Editor - Kaplan Publishing

--

We build futures one success story at a time. Visit us at www.kaplanpublishing.com

Junior | Next Rank: 30 Posts
Posts: 16
Joined: Sat Sep 05, 2009 8:58 am
Location: London
Thanked: 6 times
GMAT Score:770

by Michael Birdsall » Mon May 03, 2010 2:26 pm
Discuss how well reasoned you find this argument. In your discussion be sure to analyze the line of reasoning and use of evidence in the argument...
I find the author's argument unconvincing: the author uses jargon to draw a conclusion that is not supported by a deep understanding of that jargon, assumes that correlation is causation, suggests a comparison without actually comparing anything, and confuses numbers with percentages.

The author states, "The online CATs used in the courses are continually assessed and updated to ensure validity and accuracy, which means that practice test scores from classroom CATs are a very good indicator of your actual score level." The practice of assessing and updating exams does not, in and of itself, ensure anything. The GMAT has remained essentially unchanged for more than 10 years, so it is unclear how updating CATs does anything more than demonstrate that the product was substandard. Further, the term "validity" has a specific meaning in educational psychometric testing (https://en.wikipedia.org/wiki/Test_validity): validity is a determination of whether a test serves the purpose for which it was designed. If a test were designed to make a student feel insecure about his or her success, so that a test prep company could sell a course to that student, and the test did in fact make students feel insecure, we could reason that the validity of the test was very high. The author seems to suggest that simply by being valid, Kaplan tests are good indicators of GMAT scores. Without an explanation of what Kaplan tests were designed to do, the author is merely using jargon to avoid answering a question.

The author goes on to state that, "The correlations between highest Kaplan CAT score and the actual GMAT score are extremely strong, so we are therefore confident that we have the most realistic practice tests available, based on the validity of the exams, as well as the fact that we spend more time and money on research and development to ensure strong prep materials than any other company." In this statement the author uses two types of faulty logic.

First, the author assumes that strong correlations demonstrate that the Kaplan CATs are realistic. This may or may not be the case. A Kaplan CAT could test subjects not administered on the GMAT at all and still correlate strongly with GMAT scores. For example, if I designed a test of a candidate's knowledge of calculus and Aristotle's contributions to philosophy, it is likely that results on my exam would correlate strongly with those candidates' GMAT scores. If such a test did have a high correlation with results on the GMAT, would it be a realistic practice test for the GMAT?
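To make the correlation point concrete: correlation measures whether two score series move together, not whether they agree. In the sketch below (made-up numbers, purely illustrative, not real score data), every practice score is a full 100 points below the real score, yet the two series are perfectly correlated:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: each practice score is exactly 100 points
# below the corresponding real score.
practice = [560, 600, 640, 680, 720]
real = [660, 700, 740, 780, 820]

r = pearson(practice, real)                                       # 1.0
mean_gap = sum(b - a for a, b in zip(practice, real)) / len(real)  # 100.0
```

An "extremely strong" correlation, by itself, says nothing about whether the absolute scores are calibrated to the real exam.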

Second, in the statement above, the author assumes that spending money on research and development ensures "strong [sic] prep materials than any other company." The assumption is that the amount of money spent is a demonstration of the quality of a product. This is exactly the type of logic used by pets.com, the failed dot-com that came to epitomize the faulty reasoning of the high-tech boom of the late 1990s. Pets.com assumed that spending money would create a strong brand, resulting in a barrier between itself and any competition. The reasoning was so successful that pets.com failed within 268 days of its IPO.

In the quote above, the author states, "We also see the strongest overall results---a Harris poll recently verified that more people get into business school with a Kaplan course than any other GMAT course." The author assumes that because the greatest number of people get into business school after taking a Kaplan course that Kaplan gets the "strongest results". Since Kaplan is the biggest test prep company, it is logical that simply by being the biggest, a greater number of people will go to business school after going to Kaplan. This demonstrates nothing other than that Kaplan is big.
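The numbers-versus-percentages point can be restated with hypothetical figures (mine, not the Harris poll's) to show why a raw count proves only size:

```python
# Made-up illustrative figures, not real data.
big_company = {"students": 100_000, "admitted": 30_000}    # 30% admit rate
small_company = {"students": 10_000, "admitted": 6_000}    # 60% admit rate

def admit_rate(company):
    return company["admitted"] / company["students"]

# The bigger company sends more people to business school in absolute
# terms, yet its students succeed at half the rate.
more_admits = big_company["admitted"] > small_company["admitted"]   # True
higher_rate = admit_rate(big_company) > admit_rate(small_company)   # False
```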

The author could strengthen her argument by providing the following: an explanation of what is meant by "validity"; some evidence of the relationship between Kaplan tests and the GMAT, similar to the statistical analysis of GRE and GMAT scores published by ETS, the makers of the GRE (www.ets.org/Media/Tests/GRE/pdf/GRE_Comparison_Tool.xls); an argument that does not assume a connection between spending more money and quality; and the percentage of Kaplan test takers who get into business school, compared with other test prep companies.

Finally, as a general point, the author should avoid the kind of logic used in the response above on a forum where candidates are preparing for the GMAT. Students' critical reasoning abilities are constantly assaulted by marketing messages that employ faulty logic in an attempt to persuade consumers to buy products. Since GMAT candidates are honing their critical reasoning skills, using this logic with this audience is likely to hurt the author's cause.

GMAT Instructor
Posts: 578
Joined: Tue Aug 25, 2009 6:00 pm
Thanked: 136 times
Followed by:62 members

by KapTeacherEli » Fri May 21, 2010 10:08 am
Hi Michael,

Kaplan is the world leader in test preparation, and we are convinced our product is the best in the business. However, we're never satisfied with merely being the best, and thus continually strive to improve our product.

Back in 2008, when the original post in this thread was made, our tests were a valuable resource. They provided test-like questions under timed conditions, allowing students to practice all of the GMAT question types and analyze a breakdown of their timing and problem areas. They also provided a useful scoring benchmark to help measure progress. Some people did report wildly variant scores on Kaplan tests, but note (great idea to apply GMAT critical reasoning skills in analyzing this issue, by the way) that the people who reported on this forum were a self-selecting, and thus non-representative, sample. It's to be expected that a population of test takers will include individuals with larger deviations, and those test takers tend to be the most vocal in sharing their stories.

Part of our 40+ year tradition of teaching the GMAT is continuous improvement, and over the past two years we have made numerous changes to our CATs to make them even more test-like and accurate. The most recent revision in 2010 makes our tests the most accurate yet. We hope you'll give them a try and see for yourself.
Eli Meyer
Kaplan GMAT Teacher
Cambridge, MA
www.kaptest.com/gmat


Junior | Next Rank: 30 Posts
Posts: 16
Joined: Sat Sep 05, 2009 8:58 am
Location: London
Thanked: 6 times
GMAT Score:770

by Michael Birdsall » Fri May 21, 2010 1:05 pm
Hi Eli,

Thanks for the reply.

While I can sympathize with your concerns, all you have done is point out that there is an absence of evidence.

Surely you will agree that absence of evidence is not evidence of absence. In other words, because neither of us is sure how representative any one story is of the whole, neither of us can conclude anything. That is precisely why I posted recommendations for how the author's argument could be improved. Kaplan certainly has the resources to produce something similar to what ETS has produced, but is choosing not to. I found it very interesting that the GRE was not a more valid indicator of GMAT scores.

I admire the work of Stanley Kaplan very much, and I believe he did work for continuous improvement. As a GMAT tutor, however, I have misgivings about Kaplan's GMAT products. I posted this reply after a student of mine scored a 740 on the actual GMAT. Four days before scoring that 740, he took a Kaplan test a friend had lent him and scored a 610. Convincing him to take his test anyway was a hard sell. It shouldn't have been; he was well prepared.

Best,
Michael Birdsall
GMAT Tutor
Cambridge, UK