Artificial Intelligence

Alexander Kronrod, a Russian AI researcher, once said that "chess is the Drosophila of AI." Pieces and positions have no intrinsic utility in chess; the overriding goal is winning. For a human player, the "self" is a powerful, all-embracing symbol that binds together the goal system, the part of reality inside the mind, and the body that sees and moves for that viewpoint. To achieve expert human-level strategy, we are building an AI engine that focuses on the strategy of the game: it analytically evaluates the famous players and every recorded game, distinguishes them from one another, and so identifies personality and manner. The engine constantly evaluates the current game cycle and all the algorithms running in parallel, then chooses the right strategy for the next move. The only way to lose a game is for a player or an opponent to make a mistake; perfection is a much harder problem than simply being unbeatable.

Numerical Analytics

New algorithms focused on simplifying parallel programming. There are between 10^43 and 10^50 possible positions in chess, and the total game-tree complexity has been calculated at about 10^120. Between 0 and 218 moves are possible per turn in any given position. From these possibilities, players make N-set series of choices that we model with fuzzy-logic differential-equation algorithms. Furthermore, at the higher maxima, all models of games, viewed as Riemannian maps in Hilbert space, have N possible variations within the minima-maxima parameters of human computing ability. We also apply alpha-beta pruning as an improvement over minimax, and search for other, as yet unknown, pruning techniques. The approach works from both ends of the tablebase: an opening-moves database and an endgame database. By evaluating all unknown positions at the top hierarchy of the middle-game tree, we look for a "super move" that determines whether the match is drawn or won. The goal is a 16,000 Elo rating and, one day, a completed 32-man tablebase. The game of chess can officially be considered a draw if both players play perfectly. The first rule of chess is the last rule of chess: "Two kings can never be placed on adjacent squares."

KEPLER Architecture

It would take a minimum of three more years (until 2015) for GPGPUs to become more popular. Tesla GPU accelerators allow GPUs and CPUs to be used together in a single server node. Kepler has 65,536 32-bit registers per multiprocessor. The Kepler GPU consists of 7.1 billion transistors, making it the fastest and most complex microprocessor ever built. Each K20 card has 4.58 teraflops of peak single-precision performance and 320 gigabytes per second of memory bandwidth. Titan, K20-accelerated, was named the world's fastest supercomputer: it works with a beastly 18,688 NVIDIA Tesla K20X GPU accelerator units. NVIDIA GPUDirect for Video technology allows third-party tools to communicate directly with NVIDIA GPUs. RDMA is direct memory access from the memory of one computer into that of another without involving either one's operating system. This permits high-throughput, low-latency networking, which would be especially useful if millions of computers worldwide were interconnected into one huge cluster. Such a cluster could even beat the performance of Titan.

Dynamic Parallelism

CUDA is a C-like language that allows programmers to "easily" port their code for use on the GPU. The language is primarily a superset of C, allowing the programmer to flag which functions can run on the GPU and where variables are stored on the GPU. Although presented as a high-level language, it exposes low-level details of the architecture to the user. As in C, the programmer can allocate and de-allocate memory (via cudaMalloc and cudaFree), but can also choose specifically where on the GPU to place it. With dynamic parallelism, GPU threads can dynamically spawn new threads, allowing the GPU to adapt to the data. This tremendously simplifies parallel programming and extends GPU acceleration to a broader set of popular algorithms, such as adaptive mesh refinement (AMR), the fast multipole method (FMM), and multigrid methods.
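A rough sketch of what dynamic parallelism looks like in CUDA code (an illustrative example, not our engine's actual kernels): a parent kernel inspects its data and launches child grids directly from the device, with no round trip to the CPU between launches. The 1.0 "hot region" threshold and the kernel names here are made-up placeholders; dynamic parallelism requires a compute capability 3.5+ (Kepler GK110) GPU and relocatable device code.

```cuda
// Sketch of CUDA Dynamic Parallelism. Compile with:
//   nvcc -arch=sm_35 -rdc=true dynpar.cu -lcudadevrt
#include <cstdio>
#include <cuda_runtime.h>

__global__ void childKernel(float* data, int offset, int count, int n) {
    int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && i < offset + count) data[i] *= 0.5f;  // refine this region
}

__global__ void parentKernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && data[i] > 1.0f) {
        // Device-side launch: the GPU spawns more work where the data
        // demands it, without returning control to the CPU.
        childKernel<<<1, 32>>>(data, i, 32, n);
    }
}

int main() {
    const int n = 1024;
    float* d;
    cudaMalloc(&d, n * sizeof(float));           // allocate GPU memory
    // Initialize: mostly 0.5, with a few values above the 1.0 threshold.
    float h[n];
    for (int i = 0; i < n; ++i) h[i] = (i % 256 == 0) ? 2.0f : 0.5f;
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);
    parentKernel<<<(n + 255) / 256, 256>>>(d, n);
    cudaDeviceSynchronize();                     // wait for parent + children
    cudaFree(d);                                 // release GPU memory
    return 0;
}
```

Before Kepler, the CPU had to read back intermediate results and issue each follow-up launch itself; here the launch decision lives on the GPU, which is what makes adaptive algorithms like AMR practical.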

Pictures

  • Server Room
  • Bare Server
  • Tesla Card
  • Tesla Blades
  • Dynamic
  • Tesla K20
  • Dynamic Parallelism
  • Hyper Q

Technology

  • RDMA for GPUDirect
  • Nsight Eclipse Edition
  • CUDA BLAS
  • Dynamic Parallelism
  • Hyper Q
  • Jacket - v2.3
  • ArrayFire - v1.9

Newsletter Subscribe

Subscribe and join our mailing list.


Trusted and loved by us:
  • MathWorks
  • NVIDIA CUDA
  • Microsoft
  • National Instruments
  • Wolfram Mathematica
  • Autodesk
  • Oak Ridge National Laboratory
  • Accelereyes
Please note: not all of our technology is revealed yet, for reasons of competitive intelligence.
Twitter @CPUTER

100% of the donations received from the public will go directly to scientific research.