GMAC "punished" me for pointing out a genuine fault

Post your questions about customer service issues, GMAT exam policies, and GMAT exam structure. Get an official answer from GMAC, the organization behind the GMAT test!
Junior | Next Rank: 30 Posts
Posts: 20
Joined: Tue Dec 31, 2013 2:23 am
Dear GMAC team,

I have been forced to post my grievance here, as GMAC customer support has not replied to my emails despite multiple reminders.

When I took the GMAT for the first time, on November 23, 2013, I scored 570 because GMAT's faulty software crashed multiple times in the middle of the exam. This score was 140 points lower than on a GMATPrep test I had taken a day earlier; in fact, on first attempts I had scored 660, 690, 710, and 710 on the GMATPrep tests. After many emails and calls, Pearson conducted an investigation, found nothing wrong with the software or the scoring, and closed the case. Only at my insistence was the case reopened, but the second investigation was an eyewash: when I called phone support, they could tell me neither how many times the computer shut down nor at what point in the test it shut down. When I talked to the proctor at the exam centre, he told me that the "investigation" had found that the computer shut down only once (just before generating the score report); however, as mentioned many times in my emails, the computer had shut down twice during the break after the AWA and IR sections. This raises questions about the "investigation".

I was offered a retest for $50. On December 11, 2013, I posted on the GMATClub forum: "In normal circumstances I wouldn't have accepted the offer; the only reason I accepted the offer was because I was under time pressure (I had to take the test this month and there were no dates after Dec 24. I could not have taken a date before Dec 23 - this left me with only one date, Dec 24, which I did not want to miss)." In short, the only reason I accepted a retest was that I was running out of time.

In that post, I also wrote: "I also hope that I will be marked fairly (the algorithm will not take my past score into consideration and will not try to prove that I am a low scorer)." This was not to happen: I was "given" the same score as last time - 570. Only a couple of days before the GMAT exam on Dec 24, I had scored 50 in Quant and 39 in Verbal.

Had I been scored fairly, I would not have scored 570 again (it is highly improbable to get exactly the same score twice) - and this despite my consistently scoring above 650 on GMATPrep exams.
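To be clear, "highly improbable" is something one can estimate rather than merely assert. Here is a toy simulation of repeat scores; the assumptions (a true ability near 580, day-to-day noise of about 30 points, scores reported on a 10-point grid) are entirely my own illustration, not GMAC's actual model:

```python
import random

random.seed(1)

def reported_score(true_ability=580, day_sd=30):
    """One sitting: true ability plus day-to-day noise, rounded to the 10-point grid."""
    raw = random.gauss(true_ability, day_sd)
    return round(raw / 10) * 10

# Estimate how often two independent sittings produce the identical reported score.
trials = 100_000
repeats = sum(reported_score() == reported_score() for _ in range(trials))
print(f"estimated chance of the exact same total score twice: {repeats / trials:.1%}")
```

Whatever the exact rate comes out to under different assumptions, the point stands: the question is quantifiable, and GMAC has the data to answer it exactly.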

Unfair scoring is the last thing I expected from an institution like GMAC. As I suspected, it "punished" me for pointing out a genuine fault in the GMAT software.

It is now difficult to trust GMAC's investigations; I want to see my tests - the first as well as the second. I also know that GMAC has the "experimental" questions to fall back on - it can simply say that the questions I got right were "experimental" questions and the ones I got wrong were the scored questions.

Scoring 570 once because of the "experimental" questions can be attributed to bad luck; scoring 570 twice smells of a rat.

Again, I want to see my GMATs taken on November 23 and December 24.

I would also advise GMAC to look into the robustness of its software (stress-test it more rigorously) and into the placement of the experimental questions. To be honest, if the business schools were willing to consider the score of some other test, I would not have bothered wasting my time here.

Finally, do not play with the careers of others. Because of GMAC's software and attitude, I have lost a year of my career. I would not like others to go through what I am going through, so I will share my experience with as many people as I can, through as many mediums as I can.

Sincerely,
Govind Singh Patwal
Candidate ID: 100002346770

GMAT Instructor
Posts: 613
Joined: Thu Mar 22, 2007 6:17 am
Location: Madrid
Thanked: 171 times
Followed by: 64 members
GMAT Score:790

by kevincanspain » Wed Jan 01, 2014 5:27 am
Hello Govind,

I think your case was handled poorly: either your case had no merit, in which case you should not have been given any discount, or the irregularities may have affected your first score, in which case you should have been given a free retake. In any event, keep in mind that many people significantly underperform on exam day, especially if they took practice tests in a way that differs from test-day conditions (stress, breaks, repeated questions). I know from experience that the algorithm does NOT choose questions based on past scores. It could be that your unfounded fears undermined your performance. Put them to rest. The sooner you put this behind you, the better for you and your future success on the GMAT!
Kevin Armstrong
GMAT Instructor
Gmatclasses
Madrid


by gspatwal81 » Wed Jan 08, 2014 3:55 am
Kevin,

Thank you for replying.

I missed your reply, as no notification email was sent to me when you posted your comment. Other forums usually send one - it makes it easier to track comments and replies to posts.

Coming back to my case, I have some evidence that will prove that the case was indeed handled poorly.

Thanks again for replying.

I will post my subsequent conversation with GMAC here.

Sincerely,
Govind Singh Patwal
Last edited by gspatwal81 on Wed Jan 08, 2014 4:17 am, edited 1 time in total.


by gspatwal81 » Wed Jan 08, 2014 3:57 am
An update - GMAC replied on Jan 1.
------------------------------------------------------------------
Dear Govind,

Your testing sessions are always independent of one another: Your performance on one does not influence the questions you receive during any other testing sessions. The item selection and scoring algorithms are constant, the same for every examinee and every instance, regardless of past experience.

The experimental questions are a proportion of each section that do not count toward your score. Using your analogy, this is equivalent to training a new referee alongside the expert referees at a Grand Slam final. The judgment of the apprentice referee has no bearing on the outcome of the match, but it can be compared to the judgments of the experts to ensure that, when the new referee is used, his calls will be trustworthy. The part of your analogy that suggests that one player is penalized while the other benefits does not apply, because the GMAT is not a head-to-head match. Even so, both players played for the point and paid an equal effort cost, regardless of whether they 'won' or 'lost' the point.

As with the algorithms, the process for experimental items is the same for every examinee and every instance. Each examinee will receive the same number of experimental items as every other examinee, without regard to current or past performance. The questions that count and the questions that do not are fixed for the administration, and the score would not be manipulated by selecting the experimental items after reviewing performance on all items.

I hope this helps clarify your concerns.

Alex from the GMAC Team
------------------------------------------------------------------


by gspatwal81 » Wed Jan 08, 2014 4:00 am
My reply to GMAC (Jan 2, 2014)
------------------------------------------------------------------
Dear Alex and the rest of the GMAC team,

Happy New Year.

Thank you for your detailed reply; at last GMAC replied.

My postings on this forum are for the greater good of all stakeholders - the test takers, the test maker, and the B-schools that consider the GMAT score.

The reason I gave the tennis analogy was to show how "experimental" points make a world of difference between a fair tennis game that depends on tennis skill and one that depends on luck. Similarly, the experimental questions turn the GMAT (an exam meant to test the aptitude of a test taker) into an exam that tests a person's luck.

Stating that the number of "experimental" questions is equal for all tests does not take away from the fact that the "experimental" questions are still there. Test takers get no heads-up on the experimental questions; this is like trapping them into spending time answering questions that do not count.

You put forward the argument that "As with the algorithms, the process for experimental items is the same for every examinee and every instance. Each examinee will receive the same number of experimental items as every other examinee, without regard to current or past performance. The questions that count and the questions that do not are fixed for the administration, and the score would not be manipulated by selecting the experimental items after reviewing performance on all items." A game of dice is quite similar: as with the GMAT, the numbers on the faces are the same for every player and in every instance. Each player receives the same number of throws as every other player, without regard to current or past performance. The number of faces and the numbers on the faces are fixed, and the score would not be manipulated. Does that make rolling the dice 37 times similar to attempting 37 quant questions? Similarly, is rolling the dice 41 times similar to attempting 41 verbal questions? In both instances, is it aptitude that is being tested, or luck? And would the B-schools consider a dice player's score a reliable indicator of his aptitude? They would not.
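The dice point can even be made concrete with a short simulation. The numbers below (a 37-question section, 9 hypothetical unscored items, a fixed 70% chance of answering any question correctly) are my own illustrative assumptions, not GMAC's actual parameters:

```python
import random

random.seed(7)

def fraction_correct(n_scored, p_correct=0.70, trials=50_000):
    """Distribution of fraction-correct when only n_scored questions count toward the score."""
    return [sum(random.random() < p_correct for _ in range(n_scored)) / n_scored
            for _ in range(trials)]

def spread(xs):
    """Population standard deviation of a list of scores."""
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

# Hypothetical: a 37-question section where all 37 count vs. only 28 count (9 experimental).
all_scored = fraction_correct(37)
subset_scored = fraction_correct(28)

print(f"spread when all 37 count:  {spread(all_scored):.3f}")
print(f"spread when only 28 count: {spread(subset_scored):.3f}")
```

The same fixed skill produces a wider spread of outcomes when fewer questions actually count - which is all that my "luck" argument amounts to.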

Admittedly, saying that the GMAT is exactly like a game of dice would be an exaggeration - the GMAT still requires aptitude. At the same time, I would reiterate that the experimental questions make the GMAT and a game of dice quite similar - dependent on luck. The disappointing part is that GMAC, let alone doing anything about it, does not even admit it.

Another point to note is that of the three main stakeholders in the GMAT, the experimental questions go against two.
1. I have mentioned how they go against the test takers.
2. They also go against the B-schools, who trust the GMAT score to be an indicator of a test taker's aptitude (this is what GMAC cites as one of the reasons for taking the GMAT). The truth is that GMAT scores are not a true indicator - as pointed out earlier, they are influenced by factors similar to those in a dice game. Just as you cannot call a dice-game score an indicator of pure aptitude, a GMAT score cannot be called one either. At the moment, a B-school, assuming the GMAT score to be a perfect aptitude indicator, might screen in a test taker who scored high and screen out one who scored low. I am not sure how many B-schools, knowing the true impact of the experimental questions on the scores, would still accept GMAT scores.
3. As for the third stakeholder, GMAC: undoubtedly, the GMAT itself is the easiest place to put the experimental questions - nothing could be easier than throwing them into the real test and letting gullible test takers fall into the trap. While this is easy, the question is: is it ethical? The actual GMAT is not the right place to pounce upon unsuspecting test takers to "test" questions - this is akin to treating the test takers as guinea pigs for GMAC's experiments and experimental questions. I am quite sure there are other ways to test questions.

If GMAC still wants experimental questions in the GMAT, there are better ways to do it - get smarter and tweak the software so that the scores are less dependent on the test taker's luck and thus showcase the test taker's true aptitude. For example, rather than having a set number of experimental questions (say, 10 in a section), GMAC could tweak the software so that the number of attempted experimental questions is effectively the same for every test taker. Admittedly, the tweak would be complicated, but if GMAC wants both experimental questions and a score reflecting real aptitude, it has to think outside the box. In its present form, the scoring does not give the test takers and the B-schools what they really want - scores uninfluenced by luck.

In light of the above arguments, I would like to know:
1. How committed is GMAC to removing the luck factor from the GMAT completely and making it a test of pure aptitude?
2. What is the need for experimental questions? I am quite sure that the test makers are competent - does GMAC doubt their competence?
3. No one knows the exact number of experimental questions - is it one, two, three, thirty?

Had luck or experimental questions played no role, there would not have been a drop of 8 points in Quant and an improvement of 8 points in Verbal (my aptitude did not change in a month).

Why is it that, even after consistently scoring around 690 on prep tests and the GMATPrep tests, I scored 570 on the real GMAT both times? It has to do either with the faulty software or with the "experimental" questions.

Finally, I would again request you to reassess my tests manually and show them to me - at this point, that is what transparency calls for.

Sincerely,
Govind Singh Patwal
------------------------------------------------------------------


by gspatwal81 » Wed Jan 08, 2014 4:07 am
My reply to GMAC (Jan 3, 2014)

------------------------------------------------------------------

Dear GMAC team,

I am still awaiting answers to some questions that I asked earlier.

In an email to me, GMAC wrote: "The scores that are listed on your unofficial and the score that will be listed on the official score report once it becomes available 20 calendar days after testing will be accurate and will contain no errors."
I would like to know whether the statement above means that the scores on the unofficial report and the scores on the official report are both accurate. If yes, then what is the point of the so-called "comparison of the scores at the testing center and the Pearson hub"? It suggests that this comparison is an eyewash - as a matter of fact, do the unofficial and official scores ever vary?

Furthermore, what did the investigation into my November 23 test actually say about the number of times the computer shut down and the points in the test when it shut down? The phone support representative did not have the answer, and the proctor said that it happened once (which is not true) - so I would like to know the actual findings of the investigation.

Sincerely,
Govind Singh Patwal



by gspatwal81 » Wed Jan 08, 2014 4:12 am
I sent another message on Jan 6.

==================================================

Dear GMAC team,

I am still awaiting answers to some questions that I asked earlier.

In an email to me, GMAC wrote: "The scores that are listed on your unofficial and the score that will be listed on the official score report once it becomes available 20 calendar days after testing will be accurate and will contain no errors."
I would like to know whether the statement above means that the scores on the unofficial report and the scores on the official report are both accurate. If yes, then what is the point of the so-called "comparison of the scores at the testing center and the Pearson hub"? It suggests that this comparison is an eyewash - as a matter of fact, do the unofficial and official scores ever vary?

Furthermore, what did the investigation into my November 23 test actually say about the number of times the computer shut down and the points in the test when it shut down? The phone support representative did not have the answer, and the proctor said that it happened once (which is not true) - so I would like to know the actual findings of the investigation.

Sincerely,
Govind Singh Patwal


by gspatwal81 » Wed Jan 08, 2014 4:16 am
UPDATE
As GMAC was not responding, I called the phone support yesterday (Jan 7). After the call, I wrote to GMAC.
===================================================

Dear GMAC team,

After not receiving any reply from GMAC on the forum or on email, I decided to talk with the GMAC phone support today.

I specifically asked about the investigation into my first test.

As it turned out, the investigation had several cracks: over time, it has produced different findings.
• I took the first GMAT (the test in question) on Nov 23. During the test the computer crashed twice, and on the same day I wrote an email to GMAC: "The computer crashed after the AWA and Integrated Reasoning section - this happened in front of the test administrator (you can verify this with the computer logs and the video recording). He tried to get the computer back on, but it crashed again (again, you can verify this with the computer logs and the video recording). It was on the second attempt that the computer started." As you can read, I clearly mentioned that the computer had crashed twice after the AWA and Integrated Reasoning section.
• I called GMAT customer care on Nov 26; this was when an incident number (205361320) was created and an investigation was started.
• According to the phone rep I spoke with today, the case was updated on Dec 2 with the comment 'no issues found'.
• When I spoke with a phone rep on Dec 10 or 11, he mentioned that the investigators had found that the system had shut down, but he could not tell me how many times or when in the test.
• When I talked to the proctor at the test center, he mentioned that the investigators had found that the system had shut down only once (just before printing the score report).
• According to the phone rep I spoke with today, the same case was edited again on Jan 6 (almost one and a half months after the incident); this time it was found that the PC had crashed/shut down twice (something I have maintained from day 1). Phone support still does not know the times of the crashes.

So after one and a half months of investigation, the investigators finally found the real issue - this seriously calls the investigation into question. Had the investigators checked the logs properly, this would have been a day's work; all it involves is reading the logs with open eyes.

The shoddy investigation strengthens my belief that the scores I received are not my real scores either. I could not have scored 570 twice. The tests need to be checked again. One could assume that a majority of the questions I got right were experimental questions, but it is highly improbable that such a thing would happen twice - and if it really has, then the software needs another look.

Once bitten, twice shy. At this point, even if you start another investigation, I will frankly be very doubtful of the investigators' intent, but I am hoping against hope.

Finally, the phone rep today also mentioned that the Unofficial and Official score reports have never varied. This calls into question the purpose of the score comparison at the Pearson hub.

Sincerely,
Govind Singh Patwal
===================================================


by gspatwal81 » Fri Jan 10, 2014 3:48 am
Update:
After my talk with the GMAT phone rep, I received an email from GMAT Customer Service on Jan 7

-----------------------------------------------------------
Dear Govind Patwal,

Thank you for contacting Pearson VUE GMAT Customer Service and GMAC with your concerns. We would like to provide a response to your two most recent concerns. It is extremely rare that the Unofficial and Official Scores received by test takers ever vary. This is because the Integrated Reasoning, Quantitative, and Verbal portions of the exams are scored immediately upon completion at the test center and then once again when they arrive at the Pearson hub. This ensures that there are no errors in the scores as the exam is graded twice. The main difference between the Unofficial Score Report and the Official Score Report is that the essay score from the Analytical Writing Assessment is included on the Official Score Report.

We would also like to confirm that the exam review of your November 23, 2013 GMAT exam confirmed that your exam was restarted two times. The first restart was at the end of the first scheduled break after the Integrated Reasoning section of the exam. The second restart occurred after you had ended your exam and the Test Administrator was signing you out of the exam.

We will be unable to provide any compensation as we have confirmed that there were no errors that affected your exam on your most recent attempt on December 24, 2013. We have also already provided you a one-time courtesy and allowed you to retest at a discounted rate for your November 23, 2013 exam, which we confirmed had valid scores regardless of the fact that you encountered two restarts. We cannot extend any further courtesies for exams that we have confirmed were completed and generated accurate and valid scores. This is our final decision and no further requests will be considered.

Thank you,
Pearson VUE GMAT Customer Service
Last edited by gspatwal81 on Fri Jan 10, 2014 3:55 am, edited 1 time in total.


by gspatwal81 » Fri Jan 10, 2014 3:54 am
My reply to GMAC today (Jan 10, 2014)
------------------------------------------------------------------
Dear GMAC team,

Thank you for your email of Jan 7 (quoted above).

You mentioned that "It is extremely rare that the Unofficial and Official Scores received by test takers ever vary". The phone support executive said they have never varied. Why is there a disconnect here?
Your reply says "extremely rare", which means there is a possibility of variation - which in turn means the scoring system is not foolproof. Please quantify "extremely rare" as a percentage.

You confirmed that the exam restarted twice: "The first restart was at the end of the first scheduled break after the Integrated Reasoning section of the exam. The second restart occurred after you had ended your exam and the Test Administrator was signing you out of the exam." You did not mention that the computer restarted twice after the AWA and IR sections - without a mention of this, the investigation is incomplete. Was the investigation even able to find this? Why did it take a month and a half to reach a conclusion that should have taken at most a day? What triggered the restarts? Was it a software or a hardware failure?

No one takes the GMAT for compensation - the bare minimum one expects is a working system.

The words "one-time courtesy and allowed you to retest" sound as if the computer shutting down was my fault. It was a software/hardware failure. If GMAC's software/hardware cannot handle the test at a time when the adaptive algorithm is not even at work, then what is the guarantee that it can handle the test when the adaptive algorithm is at work? It might not shut down or restart, but one of the outcomes of such faulty software could be an erroneous score calculation.

There are some more unanswered questions from my previous messages (refer to my message of 02 Jan 2014, 08:14):
1. What is the need for experimental questions? Does GMAC doubt the competence of the test makers?
2. No one knows the exact number of experimental questions - is it one, two, three, thirty?

Had luck or experimental questions played no role, there would not have been a drop of 8 points in Quant and an improvement of 8 points in Verbal (my aptitude did not change in a month).

Why is it that, even after consistently scoring around 690 on prep tests and the GMATPrep tests, I scored 570 on the real GMAT both times? It has to do either with the faulty software or with the "experimental" questions.

I would at least like to be able to review the questions and understand how my score dropped so much. This is what transparency calls for.

Finally, almost every email of yours to me has ended with the standard sentence "This is our final decision and no further requests will be considered." More than anything, the right intent is what every test taker expects from GMAC - accept your mistakes and shortcomings, and do not just dismiss genuine requests without a proper investigation.

Sincerely,
Govind Singh Patwal

Newbie | Next Rank: 10 Posts
Posts: 6
Joined: Mon May 13, 2013 11:14 am

by Enforcer » Fri Jan 24, 2014 2:33 pm
Don't be a crybaby. My friend was scoring 630-650 in mocks, and on the real GMAT he scored 560. He had scored 550 on his previous attempt 2-3 years ago.
Did his aptitude stay the same for 3 years, while yours should have changed in just a month?

Experimental questions are there because they (GMAC) want every test taker to "waste" the optimum amount of time on them.
And the number is the same for all, so maybe you can think of it as 7.

You say luck is bad? Luck is also a factor in one's success. If it wasn't for luck, I wouldn't be an entrepreneur.
You've blamed the forum, the GMAT, the GMAC and everyone you can. (Blah! Blah! Blah! Blah!)
Maybe you should work on your test-taking ability if you're already good at, say, the Quant and Verbal parts.

Don't take the CATs sitting on your bed or while sipping your morning tea.
Try taking them in a controlled environment, just like the test centre you went to twice.

Well, it's none of my concern, but you wasted my time with this post, so I thought I'd put in my 2 cents.

All the best with your "Renaissance". ;)


Leave your kudos/thanks if you liked my post. Ciao!


by gspatwal81 » Sun Jan 26, 2014 4:13 am
@ Enforcer

I have never blamed the forum. I am rather thankful that it has provided all interested individuals a platform to express their feelings.

If your computer shuts down in the middle of the test (if you ever take it), I am pretty sure you will "cry" as well.

You mentioned the following points about the experimental questions and the "luck" factor:
  • Experimental questions are there because they (GMAC) want every test taker to "waste" the optimum amount of time on them. And the number is the same for all, so maybe you can think of it as 7.
  • Luck is also a factor to be considered in the GMAT.
GMAC team, it seems as if Enforcer knows you inside out - do you agree with his arguments/points?

Sincerely,
Govind Singh Patwal


by Enforcer » Sun Jan 26, 2014 8:08 am
Hey Govind,

I've read that experimental Qs are there in tests like the LSAT as well.
They're there so that they can be judged for difficulty level before being used in future tests.
So when you get a score of 650 and your friend takes the test a few days later, he gets previously tested Qs at every difficulty level - not just some high-school-test kind of stuff.

So yeah, they're necessary.

Thanks

Enfo


by gspatwal81 » Sun Jan 26, 2014 9:26 am
Enfo,

In your last message you said, "Experimental questions are there because they (GMAC) want every test taker to 'waste' the optimum amount of time on them." Now you are saying, "They're there so that they can be judged for difficulty level before being used in future tests." Please stick to one line of reasoning.

Also, I have already touched upon why the argument that "experimental questions are there to test questions" does not hold water. It seems you either have not read the entire thread or have not read it carefully.

A real test is not the right place to experiment - it is an experiment only for GMAC, not for the students. As I have mentioned, there are better ways to test questions (without risking anybody's scores). GMAC might as well hire smarter question writers, do more R&D, or build a dedicated team that is expert at judging the difficulty level of questions.

Experimental questions in a REAL GMAT are akin to experimental bullets issued to soldiers in a REAL war! Do you think it is right to risk soldiers' lives in a war on the pretext of "testing" the bullets? I think it only makes sense to give them bullets that have already been tested.

If GMAC or you set about justifying this, you might as well try to justify the experimental bullets.

Sincerely,
Govind Singh Patwal