Urch Forums

Everything posted by R0jkumar

  1. So in the actual exam, all the questions count toward the final score? In other words, the above raw score is out of 70? And how exactly do you know that more Chinese students are taking the exam? And even if they are, how does that mean test takers are improving?
  2. I have a question about the raw score and how it relates to the percentile. One of the guys on this forum posted his AGRE CS scores as follows:

     Score / Percentile / Raw score / Date taken
     800 / 77% / 50 / 2007.04.14
     830 / 88% / 46 / 2005.11.12

     If we look at the sample AGRE CS test booklet, a raw score of 50, which I think is very good, corresponds to 95%. So how is it that a raw score of 50 gives only 77%? What am I missing here?
  3. From the very definition of L1, a string x is in L1 if and only if some substring of x is in L. Every string in your language a^nb^n certainly has a substring in L, so a^nb^n is a subset of L1, but it is not the entire set. In other words, by using L1 = a^nb^n you are showing that if x belongs to L1, then x contains a substring in L, but you have to show the converse as well: given a string x in L, if you add any string at the beginning and any string at the end, you should get a string in L1. That won't be true if L1 = a^nb^n, since if I take a string in L and add arbitrary strings at the beginning and the end, I am not necessarily going to get a string of a's followed by b's with equal numbers of each. Hope that helps.
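The inclusion argument above can be checked by brute force. A minimal Python sketch (function names are my own): take L = { a^n b^n : n >= 1 } and L1 = all strings containing a substring in L, then exhibit a string that is in L1 but not of the form a^n b^n.

```python
def in_L(s):
    """L = { a^n b^n : n >= 1 }."""
    n = len(s) // 2
    return len(s) >= 2 and len(s) % 2 == 0 and s == "a" * n + "b" * n

def in_L1(s):
    """L1 = Sigma* L Sigma*: strings containing a substring that is in L."""
    return any(in_L(s[i:j])
               for i in range(len(s))
               for j in range(i + 1, len(s) + 1))

# "bab" contains the substring "ab", which is in L, so "bab" is in L1,
# yet "bab" is not of the form a^n b^n: L1 strictly contains a^n b^n.
print(in_L1("bab"), in_L("bab"))  # True False
```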
  4. If L is (a+b)*, then L1 can't be a^nb^n, since L1 = (a+b)*L(a+b)* = (a+b)*(a+b)*(a+b)* = (a+b)*, which is regular.
  5. To answer the original question: if A is known to be NP-hard, all you need to prove is that A is in NP to conclude that A is NP-complete. gtts: your statement below isn't really correct. "If A can be polynomially reduced to B then A is also in NP. Combining that with the fact that A is NP-hard we get that A is NP-complete." A can be polynomial-time reducible to B and yet be outside of NP. Typically, you don't use polynomial-time reducibility to prove the "in NP" portion of an NP-completeness proof. You independently prove that a problem is in NP by showing that, given a proposed solution, you can verify that the solution is correct in polynomial time. Polynomial-time reducibility is only used to prove the NP-hard portion of the NP-completeness proof. rajkumar
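The "verify a solution in polynomial time" step can be illustrated with SUBSET-SUM (my choice of example problem, not from the post): given a certificate, a list of indices claimed to sum to the target, checking it takes linear time, which is exactly what puts SUBSET-SUM in NP.

```python
def verify_subset_sum(numbers, target, certificate):
    """Certificate = list of indices claimed to sum to target.
    Runs in O(len(numbers) + len(certificate)) time, so a correct
    solution can be verified in polynomial time."""
    if len(set(certificate)) != len(certificate):
        return False  # each item may be used at most once
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False  # indices must be in range
    return sum(numbers[i] for i in certificate) == target

print(verify_subset_sum([3, 7, 1, 8], 11, [0, 3]))  # True: 3 + 8 = 11
print(verify_subset_sum([3, 7, 1, 8], 11, [1, 2]))  # False: 7 + 1 = 8
```

Note that finding the certificate may take exponential time; only checking it must be fast for NP membership.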
  6. The correct answer is (D), as gttts23 mentioned. The language (0+1)* is recursive and contains every string of 0's and 1's, so II holds as well.
  7. For the 1st question: if the array is 2-ordered, then the elements satisfy a[0] <= a[2] <= a[4] <= ... and a[1] <= a[3] <= a[5] <= ... In other words, we have 2 non-decreasing sequences of numbers, each of length n. And 1-ordered is basically a sorted array: in a sorted array we have a[0] <= a[1] <= a[2] <= ... If you take any number from a 2-ordered array, it is already in its correct position relative to the other (n-1) numbers of its own subsequence, but it may be out of order with respect to the n numbers of the other subsequence. Hence each number can be at most n positions away from its correct position in the sorted array. I am still thinking about the second part :)
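The displacement claim can be checked numerically (my own sketch, not from the post): take a 2-ordered array of length 2n, compare each element's position with its position in the sorted array, and see that no element is more than n places away.

```python
def max_displacement(a):
    """Max distance any element must move to reach its sorted position."""
    # order[k] = original index of the k-th smallest element (values distinct)
    order = sorted(range(len(a)), key=lambda i: a[i])
    return max(abs(k - order[k]) for k in range(len(a)))

# a is 2-ordered: the even-index elements (5,6,7,8) and the odd-index
# elements (1,2,3,4) each form a sorted subsequence of length n = 4.
a = [5, 1, 6, 2, 7, 3, 8, 4]
print(max_displacement(a))  # 4, i.e. exactly n positions in the worst case
```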
  8. That's not correct. You are suggesting there are 4 ways to fill each of the n-2 positions, which isn't right. Remember, a string in the language begins with any number of a's, followed by any string of b's and c's, followed by the string ab, and then followed by any number of c's. In other words, you do not have a free choice among a, b, and c for each of the n-2 positions. That would be true for, say, a string in the language (a+b+c+d)*, since there we have 4 choices at each position of the string, and hence the number of possible strings of length n would be 4^n.
  9. Got confirmed news from ETS: they don't have standby testing in India. Quite some reputation we Indians have built up over the years!
  10. I agree with taro's answers. By the way, where did you get the questions? rajkumar
  11. I did talk to ETS the other day, and this is what they told me. Supposedly, for standby testing to work there has to be at least one person taking the same subject test as you at the center you go to. Like you found out, there is no point going to a test center where no one is taking the CS exam. But I was given the impression that ETS does send some extra test booklets for standby testing, provided there is at least one person taking that particular subject test. I am going to call them today and make sure that is the case. But http://www.ets.org/portal/site/ets/m...22f951 90RCRD does mention that standby testing is not available in India, which means I am screwed. I wonder why the ETS customer service representative didn't tell me that, because she did check for available space in the Bangalore test center and also whether anyone had registered for the CS test in Bangalore. If that is the case, I have no chance of standby testing in India, right? rajkumar
  12. I can't believe I missed the registration deadline. Has anyone here tried standby testing? What are my chances of being allowed to take the exam on exam day if I try standby testing? rajkumar
  13. The answers for (d) and (e) are 3 and 1 respectively, since then the pumping lemma holds vacuously. Notice there is no string of length at least 3 in {01}, and no string of length at least 1 in the language containing only the empty string. rajkumar
  14. My understanding is that there is no test center in India for the November 3rd test for AGRE. Is that correct? rajkumar
  15. The objective of the transaction schedule is that the end result should be as if the 2 transactions were executed one after the other. That is, A+B+C should be unchanged no matter how the two transactions are scheduled. Now consider the 1st schedule: in both T1 and T2 the variable B is modified, so B is a shared resource. But in T1, B is modified outside of the critical section, since the update is not protected by any kind of mutual-exclusion lock. So if two threads were to execute T1 and T2 simultaneously, the results would be non-deterministic, and hence the end result A+B+C would be non-deterministic. The same is the case with the 3rd schedule. Only the second schedule ensures that modifications to B are serialized, guaranteeing that the final value of B is deterministic; it does this by acquiring a mutual-exclusion lock before modifying the shared resource B. Hence the value of A+B+C will be unchanged no matter how the transactions are scheduled. rajkumar
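The role of the mutual-exclusion lock can be sketched with Python threads (my own illustration, not the exact schedules from the question): the read-modify-write of the shared variable B is placed in a critical section, so concurrent updates cannot interleave and no increments are lost.

```python
import threading

B = 0
lock = threading.Lock()

def add_to_B(delta, iterations):
    """Repeatedly update the shared variable B inside a critical section."""
    global B
    for _ in range(iterations):
        with lock:         # critical section: the read-modify-write of B
            B = B + delta  # serialized, so no lost updates

t1 = threading.Thread(target=add_to_B, args=(1, 10000))
t2 = threading.Thread(target=add_to_B, args=(1, 10000))
t1.start(); t2.start()
t1.join(); t2.join()
print(B)  # 20000 on every run; without the lock, interleaved
          # read-modify-writes could lose updates and give less
```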
  16. Well, for most top schools a good score is one above the 90th percentile. For example, UC Berkeley says most admitted students who took the AGRE scored above the 90th percentile. But having said that, there is no single criterion that the admission committee bases its decision on. There may be cases where the AGRE score is low but the person has very good recommendation letters along with good research experience in a particular field, which might make up for a low AGRE score. rajkumar
  17. Fall 2008 admissions deadlines for most top schools start around December 15th, 2007 and end around February-March 2008. So if you take your AGRE exam in April 2008, you most certainly will not be able to apply for Fall 2008 at the top schools. Since you indicate that you are not adequately prepared for the exam, I would advise you to prepare, take the AGRE in April 2008, and then apply for Spring 2009. There would only be a four-month delay, since Spring 2009 means you would start school in January 2009. The problem, though, is that not many top schools admit for the Spring semester; Cornell and UCLA are the ones I can think of right off the top of my head. It is another matter if you want to apply to schools that don't really require AGRE scores; in that case you could go ahead and apply for the Fall 2008 semester. rajkumar
  18. Therein lies my doubt. Even though 4 bytes are written to the cache in each loop iteration, shouldn't updates to main memory happen in blocks? In other words, since the block size is 8 bytes, shouldn't 8 bytes be updated in main memory in each loop iteration even though only 4 bytes were updated in the cache? Am I missing something here? rajkumar
  19. Array A contains 256 elements of 4 bytes each. Its first element is stored at physical address 4,096. Array B contains 512 elements of 4 bytes each. Its first element is stored at physical address 8,192. Assume that only arrays A and B can be cached in an initially empty, physically addressed, physically tagged, direct-mapped, 2K-byte cache with an 8-byte block size. The following loop is then executed:

      for (i = 0; i < 256; i++)
          A[i] = A[i] + B[2*i];

      20. During the execution of the loop, how many bytes will be written to memory if the cache has a write-through policy? (A) 0 (B) 256 (C) 1,024 (D) 2,048 (E) 4,096

      Can someone tell me how the answer to Question 20 is (C)? The question I have is this: if, say, A[0] (4 bytes) was updated in the cache, under a write-through scheme shouldn't the entire block containing A[0] (the block size is 8 bytes) be written to main memory, even though only 4 bytes actually changed? rajkumar
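The arithmetic behind answer (C) can be reconstructed as follows (assuming the loop is `for (i = 0; i < 256; i++) A[i] = A[i] + B[2*i];`): a write-through cache forwards every store to memory as it happens, and only the word actually written (4 bytes) goes out, not the whole 8-byte block.

```python
ITERATIONS = 256  # the loop stores to A[i] once for i = 0 .. 255
WORD_SIZE = 4     # each store writes one 4-byte array element

# Write-through: every store is sent to main memory immediately, and
# only the bytes actually written are transferred -- the rest of the
# cache block is not rewritten.
bytes_written = ITERATIONS * WORD_SIZE
print(bytes_written)  # 1024, matching choice (C)
```

This also answers the doubt above: block-sized transfers happen on cache *fills* (reads), while write-through stores transfer only the written word.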
  20. a) For part (a), without going into a precise analysis, here is what I would do. Start from a*(b+c)*abc*: if I blank out the leading a's and the trailing c's, I am left with strings of the form (b+c)*ab. Now, a string of this form of length n needs a string of length n-2 from the regular expression (b+c)*, and the number of strings of length (n-2) of the form (b+c)* is 2^(n-2), since at each position we can choose in 2 different ways (either 'b' or 'c'). So intuitively the answer has to be exponential (choice E), without even counting the extra strings we get by varying the number of a's at the beginning and the number of c's at the end.

      b) This part is more straightforward. If all the a's in the string appear before all the b's, and all the b's appear before all the c's, then the strings are of the form a*abc*, which is equivalent to a+bc*. A string of length n of this form is determined simply by fixing the number of c's at the end (or, equivalently, the number of a's at the beginning). The number of c's can vary between 0 and (n-2), so we can choose it in (n-1) ways. For example, if I fix the number of c's to be 3 for a string of length 6, then the string has to be aabccc. So the answer is linear (choice A). rajkumar
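Both growth rates can be confirmed by brute force (a sketch; the regexes and helper are my own): count the strings of each length over {a, b, c} that match a*(b+c)*abc* for part (a) and a+bc* for part (b).

```python
import re
from itertools import product

L_a = re.compile(r"a*[bc]*abc*")  # part (a): a*(b+c)*abc*
L_b = re.compile(r"a+bc*")        # part (b): a*abc* = a+bc*

def count(pattern, n):
    """Number of length-n strings over {a, b, c} matching the pattern."""
    return sum(1 for t in product("abc", repeat=n)
               if pattern.fullmatch("".join(t)))

for n in range(2, 8):
    print(n, count(L_a, n), count(L_b, n))
# part (b) yields exactly n-1 strings of length n (linear growth),
# while the part (a) counts grow exponentially with n
```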
  21. 1) By polynomial-time algorithm, we mean polynomial in the length of the input. So, for example, Quick Sort, which runs in O(n^2) in the worst case, is a polynomial-time algo, since its running time is polynomial in the length of the input (the number of numbers to sort). But the standard 0-1 knapsack algorithm is not a true polynomial-time algo, since its running time is O(nW), where n is the number of items (which determines the length of the input) but W can be arbitrarily large. For example, n could be 20 while W is as large as 2^20 (in which case the running time is exponential) or even 20! (in which case it is superexponential). Look at this article on Wikipedia for more info: Pseudo-polynomial time - Wikipedia, the free encyclopedia. For algorithms on numbers, "polynomial time" has a slightly different meaning. If, for example, there existed an O(n^2) algo to test the primality of a number n, it would not be considered a polynomial-time algo, since n is not the length of the input but the number itself, which can be arbitrarily large; n could be 2^30, and you can imagine what O(n^2) translates to! So for algos dealing with numbers, a polynomial-time algo must be polynomial in the length of the input, which means polynomial in log(n), the number of bits used to represent the number on a computer. Though for a long time there was no deterministic polynomial-time algorithm to test primality, that changed with the AKS primality-testing algo invented a couple of years ago: AKS primality test - Wikipedia, the free encyclopedia. 2) You are right, the number of SCCs in a DAG is n, since an SCC with 2 or more vertices would imply a cycle in the graph (a contradiction!). 3) Can you define what biconnectedness of an undirected graph is?
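The pseudo-polynomial point about 0-1 knapsack can be made concrete with the standard O(nW) dynamic program (a sketch of the classic algorithm, with example numbers of my own choosing): the DP table has W+1 columns, so the running time grows with the numeric value of W, which is exponential in the number of bits needed to write W down.

```python
def knapsack(values, weights, W):
    """Classic 0-1 knapsack DP in O(n*W) time. n = len(values) is part of
    the input length, but W is a *number* from the input: the table size
    n*(W+1) is pseudo-polynomial, i.e. exponential in the bit-length of W."""
    dp = [0] * (W + 1)  # dp[cap] = best value achievable with capacity cap
    for v, w in zip(values, weights):
        for cap in range(W, w - 1, -1):  # reverse order: each item used once
            dp[cap] = max(dp[cap], dp[cap - w] + v)
    return dp[W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220: take items 2 and 3
```

Doubling the number of bits in W squares the table width while adding almost nothing to the input length, which is exactly why O(nW) is not truly polynomial.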
  22. In addition to using Wikipedia for design patterns, you may want to go through a design-patterns book like the Gang of Four book: Amazon.com: Design Patterns: Elements of Reusable Object-Oriented Software (Addison-Wesley Professional Computing Series): Books: Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides, or a book like Patterns in Java if you know Java: Amazon.com: Patterns in Java, Volume 1, A Catalog of Reusable Design Patterns Illustrated with UML: Books: Mark Grand. As for software development models, I think they are referring to software development lifecycle models like Spiral, Waterfall, Agile, the OO model, etc. Here are some links: Software Development Life Cycle (SDLC), Process & Business Models; Software Development Life Cycle Models - Raymond Lewallen [MVP]; Spiral Model - A New Approach Towards Software Development. Since OO is especially important, you may want to read a book like Object-Oriented Analysis and Design by Booch: Amazon.com: Object-Oriented Analysis and Design with Applications (2nd Edition) (The Benjamin/Cummings Series in Object-Oriented Software Engineering): Books: Grady Booch. I am thinking they might even go into UML diagrams (since the topic list does mention Software Development Tools), in which case going through a UML book might help. rajkumar
  23. Hi Ranga, Have you gotten your essay scores? rajkumar