
Compare

Overview

DeepSeek vs Claude: a comparison across different aspects of AI services, based on data mined from genuine user reviews and ratings, including: All, Interesting, Helpfulness, Reasoning, and Correctness. AI Store is a platform of genuine user reviews, ratings, and AI-generated content, covering a wide range of categories including AI Image Generators, AI Chatbot & Assistant, AI Productivity Tool, AI Video Generator, AI in Healthcare, AI in Education, AI in Lifestyle, AI in Finance, AI in Business, AI in Law, AI in Travel, AI in News, AI in Entertainment, AI for Kids, AI for Elderly, AI Search Engine, and AI Quadruped Robot.

Reviews Comparison


  • aigc_coder 2025-05-02 12:25
    Interesting:4,Helpfulness:4,Correctness:3

    DeepSeek V3 shows a very high hallucination rate compared to other large MoE models of similarly huge parameter count.



  • HaoZLi 2025-05-01 09:48
    Interesting:4,Helpfulness:3,Correctness:4

    There is no public access for testing the prover model. I tried a previous machine learning prompt to ask the DeepSeek model to produce a proof, but the question appears to be oversimplified, and it only gave an introductory summary. Still, the thinking process is quite interesting.



  • ZackLi 2025-04-28 23:13
    Interesting:5,Helpfulness:5,Correctness:5

    DeepSeek loves to release new models right before festival holidays. The May Labor Day holiday is approaching, and there is a high probability that the new R2 model will be released. I am quite excited about this release and curious to see how OpenAI will respond.



  • maththug 2025-01-30 09:09
    Interesting:5,Helpfulness:5,Reasoning:5,Correctness:5

    I asked the DeepSeek R1 model to calculate the exact value of \sqrt{2} using Newton's method, a classic numerical-methods exercise. The results from the DeepSeek R1 model are pretty good. It even prints out the whole thinking process as part of the LLM output. The thinking took about 101 seconds, and I have listed a few excerpts of the thinking process below:

    Okay, let's see. The user wants me to use the Newton method to calculate the exact value of sqrt(2). Hmm, Newton method, right? That's the Newton-Raphson method for finding roots. So, first, I need to recall how that works. The idea is to iteratively approach the root of a function by using tangent line approximations.

    Alright, so sqrt(2) is the solution to the equation x^2 = 2. To apply Newton's method, I should reformulate this as a root-finding problem. That means defining a function f(x) such that f(x) = 0 when x is sqrt(2). So, let's set f(x) = x^2 - 2. The root of this function is indeed x = sqrt(2).

    Next, I remember the Newton-Raphson formula: x_{n+1} = x_n - f(x_n)/f'(x_n). I need to compute the derivative of f(x). The derivative f'(x) is 2x. Plugging that into the formula gives x_{n+1} = x_n - (x_n^2 - 2)/(2x_n). Let me simplify that expression. Let's split the fraction... Wait, let me actually compute it step by step: x_{n+1} = x_n - (x_n^2)/(2x_n) + 2/(2x_n) = x_n - x_n/2 + 1/x_n = x_n/2 + 1/x_n. So, simplifying, x_{n+1} = (x_n + 2/x_n)/2. Ah, that's the familiar formula for the square root using Newton's method. So each iteration averages x_n and 2/x_n.

    Okay, so now I need to choose an initial guess x_0. Since sqrt(2) is between 1 and 2, let's pick x_0 = 1.5 as a reasonable starting point. The method usually converges regardless of the initial guess, but a better guess converges faster.

    Wait, the user wants the exact value, but Newton's method gives an approximation. Since sqrt(2) is irrational, we can't get an exact value numerically; the iterates only converge toward sqrt(2). So maybe the question is to show the steps until a certain precision is reached.
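
    For reference, here is a minimal Python sketch of the iteration the model derives, x_{n+1} = (x_n + 2/x_n)/2; the starting guess and tolerance below are illustrative assumptions, not values from the model's output.

    def newton_sqrt2(x0: float = 1.5, tol: float = 1e-12, max_iter: int = 50) -> float:
        """Approximate sqrt(2) via Newton-Raphson on f(x) = x^2 - 2."""
        x = x0
        for _ in range(max_iter):
            x_next = (x + 2.0 / x) / 2.0   # average x and 2/x
            if abs(x_next - x) < tol:      # stop once iterates agree to tolerance
                return x_next
            x = x_next
        return x

    print(newton_sqrt2())  # 1.4142135623730951, agreeing with math.sqrt(2)

    Starting from x_0 = 1.5, the iterates converge quadratically, roughly doubling the number of correct digits at each step.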



  • AI Hub Admin 2025-01-29 09:44
    Interesting:5,Helpfulness:5,Correctness:5

    Great effort from DeepSeek, which open-sourced their R1 model. It's a serious challenge to OpenAI's o1 and o3 reasoning models.



  • eileen0713 2025-01-29 09:42
    Interesting:5,Helpfulness:5,Correctness:5

    DeepSeek R1 models provide a detailed thinking process while generating responses to complex math and coding problems, at a surprisingly low cost. The best part about DeepSeek is that they even open-source their models. Great job.




  • kai 2025-05-23 09:26
    Interesting:5,Helpfulness:5,Correctness:5

    The price is $3 per million input tokens and $15 per million output tokens. Still a little expensive for performing complex tasks.
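
    To put those rates in concrete terms, a quick back-of-the-envelope cost check; the token counts below are hypothetical, chosen only for illustration:

    INPUT_RATE = 3.0 / 1_000_000    # dollars per input token ($3 / 1M)
    OUTPUT_RATE = 15.0 / 1_000_000  # dollars per output token ($15 / 1M)

    input_tokens = 100_000   # hypothetical: a large pasted codebase excerpt
    output_tokens = 20_000   # hypothetical: a long generated response

    cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    print(f"${cost:.2f}")    # $0.60 for this single request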



  • kai 2025-05-23 09:25
    Interesting:5,Helpfulness:5,Correctness:5

    Anthropic claims that Claude Opus 4 and Claude Sonnet 4 achieve strong performance across SWE-bench for coding, TAU-bench for agentic tool use, and more, spanning both traditional and agentic benchmarks. It's astonishing. How does their performance compare to OpenAI o4 and other models?



  • kai 2025-05-23 09:11
    Interesting:5,Helpfulness:5,Correctness:5

    Claude 4 is the model release I am most excited about in 2025, since OpenAI has stopped releasing new capable models. Its coding and AI agent capabilities are the most desirable features for future workflows and AI automation. Hopefully the API price will not increase too much.



  • ai4science03 2024-09-09 12:48
    Interesting:3,Helpfulness:4,Correctness:4

    Claude gives the correct answer to the math problem of differentiating a function. The result is very similar to Gemini's for the same question, "differentiation of function f(x) = e^x + log(x) + sin(x)?". It also gives the restriction x > 0, because the original function contains log(x). So it's pretty helpful.
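
    For reference, the derivative being checked, written out (assuming log denotes the natural logarithm):

    f(x) = e^x + \log(x) + \sin(x) \implies f'(x) = e^x + \frac{1}{x} + \cos(x), \quad x > 0,

    with the restriction x > 0 coming from the domain of \log(x).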



  • ai4science03 2024-09-09 12:23
    Interesting:3,Helpfulness:4,Correctness:5

    Claude answered my math question about solving a quadratic equation. It uses the quadratic formula, substitutes the coefficients a = 1, b = 5, c = 6, and gets the exact answer. The proof and step-by-step solution are exactly correct and very helpful!
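
    Written out, the substitution the review describes (the equation x^2 + 5x + 6 = 0 is implied by those coefficients):

    x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{-5 \pm \sqrt{25 - 24}}{2} = \frac{-5 \pm 1}{2}, \quad \text{so } x = -2 \text{ or } x = -3.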



  • zyncg 2024-09-05 08:24
    Interesting:4,Helpfulness:4,Correctness:4

    Claude does an amazing job writing code for the "Trapping Rain Water" LeetCode problem. It also provides an explanation of the algorithm's complexity.
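
    For comparison, a minimal two-pointer sketch of the standard solution to this problem; this is one common O(n)-time, O(1)-space approach, not necessarily the code Claude produced:

    def trap(height: list[int]) -> int:
        """Water trapped between bars, two-pointer approach."""
        left, right = 0, len(height) - 1
        left_max = right_max = 0
        water = 0
        while left < right:
            if height[left] < height[right]:
                # The shorter left side bounds the water level at this index.
                left_max = max(left_max, height[left])
                water += left_max - height[left]
                left += 1
            else:
                right_max = max(right_max, height[right])
                water += right_max - height[right]
                right -= 1
        return water

    print(trap([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]))  # 6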



  • wilsonz1989 2024-09-03 18:06
    Interesting:3,Helpfulness:5,Correctness:5

    I've tested the AI model implementation ability of multiple AI code generators, and for the prompt "Self Attention Layer in Transformer", Claude provides the best overall response. The PyTorch implementation of the Transformer model's self-attention layer is correct, and the detailed explanation is really helpful. For example, it introduces a breakdown of the Python code, including the "__init__" method, the forward method, the input tensors, the attention energy computation, etc. I will also attach the detailed text response here:

    """
    Claude Sonnet 3.5

    Here's a breakdown of how this code works:

    The __init__ method initializes the linear layers for the value, key, and query projections, as well as the final linear layer that combines the multi-head outputs.

    The forward method takes in the values, keys, query, and an optional mask tensor. The input tensors are reshaped to have a separate dimension for the number of heads, so that we can perform multi-head attention in parallel.

    The value, key, and query tensors are projected using the respective linear layers. The attention energy is computed using the einsum operation, which performs the scaled dot-product attention operation across all heads in parallel.

    If a mask tensor is provided, the attention energies corresponding to the masked positions are set to a very large negative value, so that they get zeroed out after applying the softmax.

    The attention probabilities are computed by applying softmax to the attention energies. The values are weighted by the attention probabilities using another einsum operation, and the results are concatenated across the head dimension.

    Finally, the concatenated tensor is projected using the final linear layer to obtain the output of the self-attention layer.

    You can use this SelfAttention module as a part of your Transformer architecture by applying it to the input embeddings and optionally using the output as the input to the feed-forward network.
    """
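
    For context, here is a minimal PyTorch sketch consistent with the quoted breakdown (per-head reshaping, einsum attention energies, mask fill, softmax, concatenation, final projection). It is an illustrative reconstruction under those assumptions, not Claude's exact output:

    import torch
    import torch.nn as nn

    class SelfAttention(nn.Module):
        """Minimal multi-head self-attention, following the breakdown above."""

        def __init__(self, embed_size: int, heads: int):
            super().__init__()
            assert embed_size % heads == 0
            self.heads = heads
            self.head_dim = embed_size // heads
            # Linear projections for values, keys, and queries
            self.values = nn.Linear(embed_size, embed_size, bias=False)
            self.keys = nn.Linear(embed_size, embed_size, bias=False)
            self.queries = nn.Linear(embed_size, embed_size, bias=False)
            # Final projection applied after concatenating the heads
            self.fc_out = nn.Linear(embed_size, embed_size)

        def forward(self, values, keys, query, mask=None):
            N, q_len = query.shape[0], query.shape[1]
            k_len, v_len = keys.shape[1], values.shape[1]

            # Project, then split the embedding into separate heads
            values = self.values(values).reshape(N, v_len, self.heads, self.head_dim)
            keys = self.keys(keys).reshape(N, k_len, self.heads, self.head_dim)
            queries = self.queries(query).reshape(N, q_len, self.heads, self.head_dim)

            # Attention energies for all heads in parallel: (N, heads, q_len, k_len)
            energy = torch.einsum("nqhd,nkhd->nhqk", queries, keys)
            if mask is not None:
                # Large negative value so masked positions vanish after softmax
                energy = energy.masked_fill(mask == 0, float("-1e20"))

            attention = torch.softmax(energy / (self.head_dim ** 0.5), dim=-1)

            # Weight values by attention probabilities, then merge heads back
            out = torch.einsum("nhqk,nkhd->nqhd", attention, values)
            out = out.reshape(N, q_len, self.heads * self.head_dim)
            return self.fc_out(out)

    x = torch.randn(2, 10, 64)             # (batch, seq_len, embed_size)
    attn = SelfAttention(embed_size=64, heads=8)
    print(attn(x, x, x).shape)             # torch.Size([2, 10, 64])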



  • Thomas Wilson 2024-06-24 14:22

    Claude gives me several reasons for hiring a personal injury lawyer. But it doesn't give more information on the reasons for not hiring a lawyer, nor is it tailored to my specific question about "victims of car or truck accidents". So I will only give it an average rating. Not very helpful, and I still need to search for more information after asking Claude this question.