Counting Cards help

Tell us what’s happening:
Could I use a switch statement for this challenge?

Your code so far


var count = 0;

function cc(card) {
  // Only change code below this line
  switch(card) {
    
  }
  
  return "Change Me";
  // Only change code above this line
}

// Add/remove calls to test your function.
// Note: Only the last will display
cc(2); cc(3); cc(7); cc('K'); cc('A');

Your browser information:

User Agent is: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36.

Link to the challenge:
https://learn.freecodecamp.org/javascript-algorithms-and-data-structures/basic-javascript/counting-cards

Yes. In fact I think that is the best way to solve it.

Keep in mind that some of those conditions have more than one case. Consider this example from the MDN docs:

var expr = 'Papayas';
switch (expr) {
  case 'Oranges':
    console.log('Oranges are $0.59 a pound.');
    break;
  case 'Mangoes':
  case 'Papayas':
    console.log('Mangoes and papayas are $2.79 a pound.');
    // expected output: "Mangoes and papayas are $2.79 a pound."
    break;
  default:
    console.log('Sorry, we are out of ' + expr + '.');
}

Notice that the cases for mangoes and papayas are stacked on top of each other. The switch will match either of them and won’t stop falling through until it hits the end of the switch or a break statement. You’ll need to do something like that for each group of cards that shares a condition — see the partial sketch below.
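A partial sketch (deliberately incomplete, so it doesn’t spoil the solution): cards 2 through 6 all adjust the count the same way, so their cases stack.

switch (card) {
  case 2:
  case 3:
  case 4:
  case 5:
  case 6:
    count++; // any of the stacked cases falls through to here
    break;
  // ...handle the other groups of cards the same way
}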


Using an object lookup is faster than a switch.

function cc(card) {
  // Map each card to its count adjustment and look it up by key.
  // (`count` is the global from the starter code.)
  const deck = {
    2: 1,
    3: 1,
    4: 1,
    5: 1,
    6: 1,
    7: 0,
    8: 0,
    9: 0,
    10: -1,
    'J': -1,
    'Q': -1,
    'K': -1,
    'A': -1
  };
  count += deck[card];

  return count <= 0 ? count + " Hold" : count + " Bet";
}

First of all, this is a learning forum, so please don’t just blurt out answers to curriculum questions. If you feel the need, please wrap them in [spoiler] tags so they blur and people don’t see the answer by accident.

Secondly, by what measure is “an object lookup faster than a switch”? I just ran a quick, rudimentary benchmark, and over trials of running through every possible input 500k times, I have the switch method coming out ahead:

*** trial  1 

object method: 3839.72802734375ms
switch method: 2361.05517578125ms

*** trial  2 

object method: 3375.091064453125ms
switch method: 2089.1142578125ms

*** trial  3 

object method: 3385.7080078125ms
switch method: 2160.1611328125ms

*** trial  4 

object method: 3639.544921875ms
switch method: 2224.508056640625ms

*** trial  5 

object method: 3715.157958984375ms
switch method: 2274.09423828125ms
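(For context, the harness was along these lines. This is a simplified sketch, not the exact code; objectMethod and switchMethod stand in for the two cc implementations above.)

const cards = [2, 3, 4, 5, 6, 7, 8, 9, 10, 'J', 'Q', 'K', 'A'];

function trial(label, fn) {
  console.time(label);
  // run the implementation under test against every input 500k times
  for (let i = 0; i < 500000; i++) {
    for (const card of cards) {
      fn(card);
    }
  }
  console.timeEnd(label); // logs e.g. "object method: 3839.728ms"
}

trial('object method', objectMethod);
trial('switch method', switchMethod);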

Not that speed optimization is the most important thing for an algorithm like this. This isn’t going to run a trillion times a day. And if it were, JS wouldn’t be the best language. Sometimes things like readability and maintainability are more important. Personally, I find the switch method easier to read, but that comes down to personal preference.

Object lookup has a time complexity of O(1).
Switch has a time complexity of O(n).

It’s basic algorithmic knowledge.

Time complexity is measured by the time it takes to solve a problem algorithmically in the worst-case scenario. All code should aim for readability and speed.

I’m not sure what tests you used, but these tests are all over the internet.
https://jsperf.com/if-switch-lookup-table/10

Plus, I’m a developer for a telecommunications company and we work with large sets of data: millions of records. A user doesn’t want to wait seconds for results.

As a side note, JavaScript can easily handle large amounts of data.

Let’s examine readability, shall we?

Your example:


var expr = 'Papayas';
switch (expr) {
  case 'Oranges':
    console.log('Oranges are $0.59 a pound.');
    break;
  case 'Mangoes':
  case 'Papayas':
    console.log('Mangoes and papayas are $2.79 a pound.');
    // expected output: "Mangoes and papayas are $2.79 a pound."
    break;
  default:
    console.log('Sorry, we are out of ' + expr + '.');
}

vs object lookup method

var expr = 'Papayas';
const fruit = {
  "Oranges": "Oranges are $0.59 a pound.",
  "Mangoes": "Mangoes and papayas are $2.79 a pound.",
  "Papayas": "Mangoes and papayas are $2.79 a pound."
}[expr] || `Sorry, we are out of ${expr}.`;

console.log(fruit);

Much cleaner and faster – no breaks needed.

Notice that the cases for mangoes and papayas are stacked on top of each other.

I always think it’s comical when people have paid so little attention to what I’ve written that they almost say exactly what I’ve written and act like they’re informing me of something. Yes, I know that. That is the point. I find that more readable and closer to the “story” of the algorithm. Someone who looks at it for the first time can see exactly what is going on. True, it wouldn’t take that long to read yours, but (imho) it would take a little longer, and when you are working on a program with 100k lines of code, every little thing you can make instantly clear is an improvement.

I’m not going to argue with you about readability, which is very subjective. I prefer the switch, not by much but enough. It’s OK if you disagree.

Your test metrics aren’t as impressive as you seem to think. Except for a few outliers, most are pretty even. They are also different from mine, which you can try here. True, a codepen test isn’t as scientific. And on my laptop at home I’m getting different results.

That’s part of the problem: JS is not that standardized in how it’s implemented, so different browsers may implement these things differently.

I’m having a hard time confirming that object lookup is O(1). Clearly a true fixed-length array lookup is, because you can calculate the address directly; I remember that from Assembly. But JS does some weird things in how it implements data structures. I assume that the object would (at best) use some kind of hash table. Even then, there has to be some allowance for hash collisions, which brings us back toward O(n) in the worst case. As a caveat, I am certainly no expert on this, but just trying to visualize it in my head, I can see no way in which an object get can have the same “instant” lookup that a fixed-length array does.

In fact, I’m not convinced that a JS array lookup is a true O(1) lookup. That’s what they teach you in CS class because that’s what it is for languages with “true” arrays like Assembly and C, but JS arrays are really objects. When people talk about algorithm complexity, they tend to talk language-agnostically, which usually means they’re thinking about “pure” data structures like you would have in C. JS uses heavyweight objects for its data structures, and the implementation differs across engines/browsers, so it is impossible to talk with precision about the exact complexity of a lot of what is happening internally in JS.

If you have an authoritative source, please let me know. I’ve looked and I find a lot of mixed opinions without much support, seeming to just assume it because it was what they were taught. When I look at things that actually have some source material and try to build a well founded argument, things start to lean towards O(n), but I haven’t found anything definitive. (Again, I’m not sure if that’s possible with JS.)

But, again, it’s silly to argue about complexity here, since (even by your own benchmarks, ignoring the outliers) they are essentially equivalent. And this is not an algorithm that needs optimizing anyway. And I still prefer the switch as it better fits the “story” of the algorithm, imho. But that is an opinion, of course. I think coders sometimes get obsessed with the “cult of efficiency” or the “cult of concision”. Sometimes efficiency is an important thing. Sometimes its pursuit can cause more problems than it solves.

It’s really simple if you think about it.

Switch statements fall through each test case until one evaluates to true, meaning it could (and often does) end up using the default case. That means it can go through n case comparisons. That would be O(n).

For key lookup, depending on the hashing algorithm for exactness, all hashmap lookups are evaluated as O(1). When using a JavaScript object lookup, the key is found significantly faster than by going through each and every key. It’s estimated to be O(1).

Here’s a simple example: say your hashing algorithm was modulo 10, and say you had 100 keys.

With a switch statement, and with a key of 100, you would have to go through 100 evaluations to find a match.

But with hashmaps or object lookups:

All numbers ending in 0, such as 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, would be in the same bucket.

Numbers ending in 1 (11, 21, 31, 41, 51, 61, 71, 81, 91) would all be in a 2nd bucket…
Numbers ending in 2 (12, 22, 32, 42, 52, 62, 72, 82, 92) would all be in a 3rd bucket…
and the pattern goes on…

Whenever a key needed to be found, it would apply modulo 10 to its value and know which bucket to search for the matching key. It wouldn’t matter what the number was: one evaluation eliminates roughly 90 of the 100 keys.

This was just a simple example of a basic hashing algorithm. Object lookup algorithms are far better than the one I explained and work recursively to find their keys extremely fast.
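To make the bucket idea concrete, here is a toy version of that modulo-10 scheme (illustrative only; real JS engines do not implement objects this way):

// Toy hash table where h(key) = key % 10
const buckets = Array.from({ length: 10 }, () => []);

function put(key, value) {
  buckets[key % 10].push([key, value]); // one evaluation picks the bucket
}

function get(key) {
  // only the one matching bucket is scanned, not all 100 keys
  for (const [k, v] of buckets[key % 10]) {
    if (k === key) return v;
  }
}

for (let k = 1; k <= 100; k++) put(k, 'value ' + k);
console.log(get(42)); // "value 42", found after scanning ~10 entries instead of 100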

I could easily find links and post them, but if you think about it logically, you seem smart enough to figure it out.

For key lookup, depending on the hashing algorithm for exactness, all hashmap lookups are evaluated as O(1).

You keep working off this assumption. This assumes a perfect hash function that has been optimized for the exact size of the table and key entries. I don’t see how JS could possibly do this for objects constructed and modified on the fly. Just saying something is “really simple” is not a cogent argument.

Your example requires you to perfectly fit the hashing function to the data being used. Even the example you give is not O(1). The fact that you mention recursion through a two dimensional data structure is a further nail in the coffin.

Yes, if you have a hash function perfectly matched to a predetermined set of data (an impossible assumption when letting JS do it for you, on the fly, at runtime), your best time would be O(1). But the worst case is still O(n), which is what time complexity is supposed to measure. I’ll grant you that it will probably, on average, perform better than many O(n) algorithms, but that is not usually what we concern ourselves with when we talk about algorithm analysis; we talk about worst cases. At least that is how I was taught. Yes, finding the bucket is O(1), and you seem to think it stops there. To quote Cornell University:

Hash tables and amortized analysis

We’ve seen various implementations of functional sets. First we had simple lists, which had O(n) access time. Then we saw how to implement sets as balanced binary search trees with O(lg n) access time. Our current best results are this:

                    linked list, no duplicates   balanced binary trees
add (insert)        O(n)                         O(lg n)
delete (remove)     O(n)                         O(lg n)
member (contains)   O(n)                         O(lg n)

What if we could do even better? It turns out that we can implement mutable sets and maps more efficiently than the immutable (functional) sets and maps we’ve been looking at so far. In fact, we can turn an O(n) functional set implementation into an O(1) mutable set implementation, using hash tables. The idea is to exploit the power of arrays to update a random element in O(1) time.

We store each element of the mutable set in a simple functional set whose expected size is a small constant. Because the functional sets are small, linked lists without duplicates work fine. Instead of having just one functional set, we’ll use a lot of them. In fact, for a mutable set containing n elements, we’ll spread out its elements among O(n) smaller functional sets. If we spread the elements around evenly, each of the functional sets will contain O(1) elements and accesses to it will have O(1) performance!

                    hash table
add (insert)        O(1)
delete (remove)     O(1)
member (contains)   O(1)

This data structure (the hash table) is a big array of O(n) elements, called buckets. Each bucket is a functional (immutable) set containing O(1) elements, and the elements of the set as a whole are partitioned among all the buckets. (Properly speaking, what we are talking about here is open hashing, in which a single array element can store any number of elements.)

There is one key piece missing: in which bucket should a set element be stored? We provide a hash function h(e) that, given a set element e, returns the index of a bucket that element should be stored into. The hash table works well if each element is equally and independently likely to be hashed into any particular bucket; this condition is the simple uniform hashing assumption. Suppose we have n elements in the set and the bucket array is length m. Then we expect α = n/m elements per bucket. The quantity α is called the load factor of the hash table. If the set implementation used for the buckets has linear performance, then we expect to take O(1 + α) time to do add, remove, and member. To make hash tables work well, we ensure that the load factor α never exceeds some constant αmax, so all operations are O(1) on average.

The worst-case performance of a hash table is the same as that of the underlying bucket data structure (O(n) in the case of a linked list), because in the worst case all of the elements hash to the same bucket. If the hash function is chosen well, this will be extremely unlikely, so it’s not worth using a more efficient bucket data structure. But if we want O(lg n) worst-case performance from our hash tables, we can use a balanced binary tree for each bucket. [emphasis original]

Be sure to get to the last paragraph, which says exactly what I’ve been saying. You are glossing over your assumption of a balanced hash table with a perfect hashing function.
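To make that worst case concrete, here’s a toy sketch (hypothetical code, with a deliberately terrible hash function) where every key lands in the same bucket and lookup degenerates into a linear scan:

// Degenerate hash: every key maps to bucket 0, so lookup becomes O(n)
const worstHash = key => 0;
const buckets = [[]];

function put(key, value) {
  buckets[worstHash(key)].push([key, value]);
}

function get(key) {
  for (const [k, v] of buckets[worstHash(key)]) { // scans every entry
    if (k === key) return v;
  }
}

for (let k = 0; k < 1000; k++) put('key' + k, k);
get('key999'); // walks all 1000 entries: the O(n) worst case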

Even if I forgot what I was taught and assumed that complexity analysis is about the best case, I would still have to make big unfounded assumptions about the efficiency of JS. JS is not an efficient language; it wasn’t designed for that. It has many strengths, but fast, efficient computation and data access is not one of them. All complexity analysis assumes the language being used is optimized for mathematical and data-structure efficiency. JS is not. And the implementation of these things is not standardized across engines/browsers, so the only way I could make a best-case argument would be to pull a bunch of assumptions out of my butt.

Yes, I agree that if you personally pick the hashing function that is perfectly matched to your predefined data, you can pretty dependably get pretty close to O(1). Good luck finding those conditions in the real world. And if it is that important, then JS is possibly one of the worst languages to choose.

But again, it is asinine to imply that it matters at all in this specific case. This is a function that will run a few times, with pauses, in a language that is comparatively slow and inefficient anyway.

Do you have a more concrete argument than contradicting yourself, misrepresenting computer science concepts, and “because I say so”?

Look, you can argue with me all you want, but it doesn’t change the fact that switch statements are inherently slower than hashmaps (object lookups).

You don’t have to take my word for it. Read for yourself:

Wow, just wow.

Thanks for:

  1. Not admitting that you’d just been caught in a couple of factual mistakes, essentially moving the goalposts as you go.
  2. Posting a blog article (a blog article! one with spelling mistakes, no less! perfect! go blog-o-sphere!) as if that trumps material from one of the most prestigious universities in the world. It is also the logical fallacy of argumentum ab auctoritate. You didn’t really defend your assertions or argue against mine; you just googled an article (one you didn’t read very closely) and tried to use it like a magic wand. Yes, we know you can google and post the first article you find that seems to agree with you after skimming a few paragraphs. People are really good at that nowadays.
  3. Not getting that the article doesn’t really explain the whys but just repeats what everyone assumes, in a generalized, idealized form. Just repeating assumptions isn’t an argument. These are all assumptions that I have discussed in one form or another, and you repeat them as if repeating them enough times will make everyone nod their heads in agreement, or as if you aren’t reading closely enough to understand.
  4. Not reading your own cited article closely enough to see those asterisks next to the idealized run times and realize what they mean.
  5. Not reading your own cited article closely enough to see it talking about “perfect” and “ideal” hash functions (the point I and both articles keep making and you keep ignoring) and not realizing that that is a huge assumption, and something over which you have no control in JS. The simple fact is that no generic, non-idealized hash function can ever be expected to operate at O(1) for all (or even the vast majority of) situations. Period. If you think it can, then you don’t understand hashmaps, or hashing functions, or both.
  6. Not reading and understanding lines like “Collision in hashmaps is unavoidable when using an array-like underlying data structure. So one way to deal with collsions [sic] is to store multiple values in the same bucket. When we try to access the key’s value and found various values we iterate over the values O(n). [emphasis original].” (True, it’s followed by a sentence that seems to ameliorate that, but it is again based on an idealism only reached with an idealized implementation and/or as n approaches infinity.)
  7. Not understanding that this (like all complexity talk) is based on an idealized CS implementation, a transparent one that you construct specifically for the data at hand, not the generic, all-purpose, opaque, static hash function of whichever JS engine is being used by whatever browser, which, like all of JS, is not optimized for calculations and data structures: the opposite of the assumptions in big-O notation talk.
  8. Not understanding (it bears repeating) that JS is not a light, quick, and efficient language like C or Assembly. It is big and heavy and slow. It is not a scalpel but a Swiss Army knife with 250 attachments and the optional cappuccino maker. You can make all the idealized generalizations you want about the scalpel (and the idealized versions of all the tools), but those cannot be transferred to the Swiss Army knife implementation without big caveats, which you keep glossing over.
  9. Concluding with a statement that (stripped of its implied hyperbole and assumption of ideal conditions) is basically something I’ve already said (with documented qualifications) is generally true, and acting as if that is some kind of triumph!
  10. Not realizing that in modern computing, for certain applications (like this one here, imho), efficiency is sometimes the least important thing to consider.

Bonus:

  1. Failing to acknowledge (or not realizing) that in your own benchmarks (ignoring some obvious outliers) the data doesn’t back up your assumptions. In fact, in some implementations your solution does slightly worse. Again, it’s what I said: it’s not a big difference, and it will depend on the opaque implementation.

But thanks for playing!

Lol, I’m not about to argue with a two-year freelance programmer with a degree in music who cites himself as a senior mobile developer. Yeah, I read your LinkedIn.

The whole discussion was about the readability and speed of an object lookup vs. a switch statement.

If my posts and the link (which you asked for) don’t help you see the obvious differences, then I feel sorry for all the new programmers on this site who take advice from you.

One year ago you were job searching and didn’t even know what to do with programming, and now you suddenly know everything… lol.

Guess you forgot, so let me remind you: Where to go next?

And I quote you:

“Yeah, that was another thing that I considered. I had a lot of that stuff when I studies [sic] C, decades ago. I’ve thought about going through and building all those old data structures and algorithms in JS. I’m not sure how applicable they are to modern coding, but they do appear to be a touchstone for some interviewers.”

It is not nice to try to discredit an argument by trying to discredit the person… you are sliding down the slope of logical fallacies.

It was an interesting debate while it was about programming.
That’s the reason a community is a wonderful thing: people can debate things, what works best, the best conventions… and everyone who reads can learn new things.

I hope you would like to keep debating it without resorting to name calling; like that, it is not interesting anymore :slight_smile:

There’s no name calling. I’m merely commenting on his experience in JS, citing his own posts.

The original argument was about the readability and speed of an object lookup vs. a switch statement.

I presented both in terms of readability. Clearly the object lookup is easier to read; he still disagreed.
I went on to explain that hash lookups are O(1) and switch statements are O(n).

So instead of owning up to which is clearly faster, he changes the discussion to what is “truly” O(1). Even with an average case of O(1), it’s still faster than switch statements. Sure, some browsers have implemented efficiency tweaks to make switch statements work faster. Nobody truly knows the internals of JavaScript and how they’re implemented, but it is common knowledge that hashmaps are the fastest way of searching for an element. They’re always given the time complexity of O(1). Sure, there’s index collision and other small details which could make a hashmap O(n), but in the average case everyone identifies a hashmap as O(1).

So if you sum it up…
the switch statement’s average time complexity is O(n) and the hashmap’s average time complexity is O(1). He can try to split hairs over what exactly O(1) means, but it still doesn’t change the fact that object lookups are faster and cleaner.

As a side note, there’s a method called chaining which eliminates the possibility of index collision, making it closer to a perfect O(1) with a worst case of O(log n). But I’m not really interested in discussing the internals and exactness of what is O(1). The discussion is about which is faster and easier to read.

Even if you implemented a binary search (O(log n)) to find a key, it’s still faster than a switch statement (O(n)).

Your own source says that without an ideal hashing function, the worst case is O(n), which is what time complexity is supposed to measure. Q.E.D. You haven’t provided any evidence for anything you’ve claimed (other than sources you clearly haven’t read or understood, and which contradict what you are saying). This isn’t a debate. This is someone restating their opinion without any cogent argument.

As a side note, there’s a method called chaining which eliminates the possibility of index collision, making it closer to a perfect O(1) with a worst case of O(log n).

OK, now I’m starting to get pissed. I’m not talking about the perfect implementation of a hash table. I’m talking about how JS searches objects. When you show me how you are overriding the JS hash functionality to apply your perfect solution to searching an object, this will be relevant. Until then this is just trying to change the subject.

function switchFunction(){

    // note: `fruit` is assigned without being declared, so it's an implicit
    // global (hence the StaGlobal instructions in the bytecode below)
    const expr = 'Papayas';

    switch (expr) {
      case 'Oranges':
        fruit = 'Oranges are $0.59 a pound.';
        break;
      case 'Mangoes':
      case 'Papayas':
        fruit = 'Mangoes and papayas are $2.79 a pound.';
        break;
      default:
        fruit = 'Sorry, we are out of ' + expr + '.';
    }

    return fruit;
}

function ifFunction(){

    const expr = 'Papayas';
    
    if (expr === 'Oranges'){
        fruit = 'Oranges are $0.59 a pound.';
    } else if (expr === 'Mangoes'){
        fruit = 'Mangoes and papayas are $2.79 a pound.';
    } else if (expr === 'Papayas'){
        fruit = 'Mangoes and papayas are $2.79 a pound.';
    } else {
        fruit = 'Sorry, we are out of ' + expr + '.';
    }

    return fruit;
}

function lookupFunction(){

    const expr = 'Papayas';
    const fruit = {
        "Oranges": 'Oranges are $0.59 a pound.',
        "Mangoes": 'Mangoes and papayas are $2.79 a pound.',
        "Papayas": 'Mangoes and papayas are $2.79 a pound.'
    }[expr] || 'Sorry, we are out of ' + expr + '.';

    return fruit;
}

switchFunction();
ifFunction();
lookupFunction();
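(If you want to reproduce this kind of dump yourself, V8 can print it via Node, something like the line below; exact flags vary by version, and fruit.js is just a placeholder name for a file containing the three functions above.)

node --print-bytecode fruit.js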

Bytecode for the switch and object lookup versions (the if/else version compiles to bytecode nearly identical to the switch’s, so I’ll only show it once):

[generated bytecode for function: switchFunction]
Parameter count 1
Frame size 24
   85 E> 0x1fd4532bbeca @    0 : a1                StackCheck 
  107 S> 0x1fd4532bbecb @    1 : 12 00             LdaConstant [0]
         0x1fd4532bbecd @    3 : 26 fb             Star r0
  123 S> 0x1fd4532bbecf @    5 : 12 01             LdaConstant [1]
         0x1fd4532bbed1 @    7 : 65 fb 00          TestEqualStrict r0, [0]
         0x1fd4532bbed4 @   10 : 27 fb fa          Mov r0, r1
         0x1fd4532bbed7 @   13 : 94 12             JumpIfTrue [18] (0x1fd4532bbee9 @ 31)
         0x1fd4532bbed9 @   15 : 12 02             LdaConstant [2]
         0x1fd4532bbedb @   17 : 65 fa 00          TestEqualStrict r1, [0]
         0x1fd4532bbede @   20 : 94 12             JumpIfTrue [18] (0x1fd4532bbef0 @ 38)
         0x1fd4532bbee0 @   22 : 12 00             LdaConstant [0]
         0x1fd4532bbee2 @   24 : 65 fa 00          TestEqualStrict r1, [0]
         0x1fd4532bbee5 @   27 : 94 0b             JumpIfTrue [11] (0x1fd4532bbef0 @ 38)
         0x1fd4532bbee7 @   29 : 87 10             Jump [16] (0x1fd4532bbef7 @ 45)
  169 S> 0x1fd4532bbee9 @   31 : 12 03             LdaConstant [3]
  175 E> 0x1fd4532bbeeb @   33 : 15 04 01          StaGlobal [4], [1]
  215 S> 0x1fd4532bbeee @   36 : 87 1c             Jump [28] (0x1fd4532bbf0a @ 64)
  274 S> 0x1fd4532bbef0 @   38 : 12 05             LdaConstant [5]
  280 E> 0x1fd4532bbef2 @   40 : 15 04 01          StaGlobal [4], [1]
  332 S> 0x1fd4532bbef5 @   43 : 87 15             Jump [21] (0x1fd4532bbf0a @ 64)
  362 S> 0x1fd4532bbef7 @   45 : 12 06             LdaConstant [6]
         0x1fd4532bbef9 @   47 : 26 f9             Star r2
         0x1fd4532bbefb @   49 : 25 fb             Ldar r0
  394 E> 0x1fd4532bbefd @   51 : 32 f9 03          Add r2, [3]
         0x1fd4532bbf00 @   54 : 26 f9             Star r2
         0x1fd4532bbf02 @   56 : 12 07             LdaConstant [7]
  401 E> 0x1fd4532bbf04 @   58 : 32 f9 04          Add r2, [4]
  368 E> 0x1fd4532bbf07 @   61 : 15 04 01          StaGlobal [4], [1]
  419 S> 0x1fd4532bbf0a @   64 : 13 04 05          LdaGlobal [4], [5]
  432 S> 0x1fd4532bbf0d @   67 : a5                Return
[generated bytecode for function: lookupFunction]
Parameter count 1
Frame size 24
  874 E> 0x189603a3c3b2 @    0 : a1                StackCheck 
  896 S> 0x189603a3c3b3 @    1 : 12 00             LdaConstant [0]
         0x189603a3c3b5 @    3 : 26 fb             Star r0
  925 S> 0x189603a3c3b7 @    5 : 79 01 00 29 f9    CreateObjectLiteral [1], [0], #41, r2
         0x189603a3c3bc @   10 : 25 fb             Ldar r0
 1103 E> 0x189603a3c3be @   12 : 29 f9 01          LdaKeyedProperty r2, [1]
         0x189603a3c3c1 @   15 : 92 12             JumpIfToBooleanTrue [18] (0x189603a3c3d3 @ 33)
         0x189603a3c3c3 @   17 : 12 02             LdaConstant [2]
         0x189603a3c3c5 @   19 : 26 f9             Star r2
         0x189603a3c3c7 @   21 : 25 fb             Ldar r0
 1137 E> 0x189603a3c3c9 @   23 : 32 f9 03          Add r2, [3]
         0x189603a3c3cc @   26 : 26 f9             Star r2
         0x189603a3c3ce @   28 : 12 03             LdaConstant [3]
 1144 E> 0x189603a3c3d0 @   30 : 32 f9 04          Add r2, [4]
         0x189603a3c3d3 @   33 : 26 fa             Star r1
  1169 S> 0x189603a3c3d5 @   35 : a5                Return

If you compare the switch to the if/else version, you’d see the bytecode is almost identical: each case is tested in turn, jumping only when one evaluates to true.

Whereas with the object lookup it jumps straight to the key (the single LdaKeyedProperty).