The best paper of ICLR is mind blowing 🤯🤖 they showed that inside a big neural network there's a far smaller subnetwork that can be trained to reach the SAME performance as its oversized parent = LESS computation 😮🥰 so we might have been doing it wrong this whole time!! Apparently these winning subnetworks can be under 10-20% of the original size 😅 With a large network we kind of protect ourselves from a bad random initialisation, but in the end most of the weights aren't really contributing much to the final result. So the team trained the network, pruned away the lowest-magnitude "unnecessary" weights, reset the survivors to their original initial values, and trained again, repeating until the network got smaller and smaller. You should read the paper ("The Lottery Ticket Hypothesis") if you're interested 🥰
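Here's a minimal sketch of that prune-rewind-retrain loop (iterative magnitude pruning with a rewind to the original initialisation). The "training" step is a stand-in for real SGD, and every name here is illustrative, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(weights, mask, steps=100):
    # Stand-in for real SGD on a task loss; only surviving weights move.
    for _ in range(steps):
        fake_grad = rng.normal(0.0, 0.01, size=weights.shape)
        weights = (weights - fake_grad) * mask
    return weights

def prune(weights, mask, fraction=0.2):
    # Drop the fraction of surviving weights with the smallest magnitude.
    alive = np.abs(weights[mask == 1.0])
    threshold = np.quantile(alive, fraction)
    return mask * (np.abs(weights) > threshold)

shape = (256, 256)
w_init = rng.normal(0.0, 0.1, size=shape)  # remember this initialisation!
mask = np.ones(shape)

for round_ in range(5):
    w = train(w_init * mask, mask)  # train the current subnetwork
    mask = prune(w, mask)           # cut the least useful weights
    # Crucially, the next round restarts from w_init, not from w.
    print(f"round {round_}: {mask.mean():.1%} of weights remain")

# The "winning ticket" is the sparse network (w_init * mask),
# retrained from scratch using the ORIGINAL random initialisation.
```

The key trick is that rewind: the pruned subnetwork only trains well when it keeps the initial weights it was "born" with, which is why the paper calls it a lottery ticket.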
Long weekends are always a good opportunity to do those chores you just haven't gotten around to. Whether it's a design update or a DIY project, don't let the extra day off go by without spending a little time taking care of your home.
Are You The Next #Balacque #Dyamond? See yourself in a new #light by attending the Dyamond Club #Casting Call Event, Saturday & Sunday, June 15-16, 2019 in #Richmond, #Virginia.
TO #APPLY: [Link in Bio] or send a #DM to get the direct link!
Know someone who would #shine bright like a Dyamond? TAG 'EM!
Follow @dyamondclubvip to stay up to date on the most recent #developments.