Customer Service

Sellers can enjoy more benefits by signing up.

Free shipping on all orders.

Mini

GMKtec EVO-X2 AI Mini PC Ryzen AI Max+ 395 (up to 5.1GHz) Mini Gaming Computers, 128GB LPDDR5X 8000MHz (16GB*8) 2TB PCIe 4.0 SSD, Quad Screen 8K Display, WiFi 7 & USB4, SD Card Reader 4.0

4.10

35 sold in the last 32 hours

$2,299.98 (list price $2,305.00, -0.22%)



Size
128GB LPDDR5X + 2TB / 96GB LPDDR5X + 1TB
Quantity:

Estimated delivery time: 12-26 days (international), 3-6 days (US)

Returns are accepted within 45 days of purchase. Duties and taxes are not refundable.

  • ASIN:

    B0F53MLYQ6

  • BEST_SELLERS_RANK:

#1,392 in Computers & Accessories; #47 in Mini Computers

  • BRAND:

    GMKtec

  • CARD_DESCRIPTION:

    Integrated

  • CHIPSET_BRAND:

    AMD

  • COLOR:

    Silver

  • COMPUTER_MEMORY_TYPE:

    DDR5 RAM

  • CUSTOMER_REVIEWS:

4.1 out of 5 stars (59 ratings)

  • DATE_FIRST_AVAILABLE:

    April 16, 2025

  • FLASH_MEMORY_SIZE:

    64 MB

  • GRAPHICS_CARD_RAM_SIZE:

    128 GB

  • GRAPHICS_COPROCESSOR:

    AMD Radeon 8060S Graphics 40Cores RDNA3.5

  • HARD_DRIVE:

    2 TB PCIe 4.0 M.2 2280 SSD Dual Slots Max.4TB Each Slot

  • HARD_DRIVE_INTERFACE:

PCIe x16

  • ITEM_MODEL_NUMBER:

    EVO-X2

  • ITEM_WEIGHT:

    7.21 pounds

  • MAX_SCREEN_RESOLUTION:

    7680x4320

  • MEMORY_SPEED:

    8000 MHz

  • NUMBER_OF_PROCESSORS:

    16

  • NUMBER_OF_USB_2_0_PORTS:

    2

  • NUMBER_OF_USB_3_0_PORTS:

    3

  • OPERATING_SYSTEM:

    Windows 11 Pro

  • PACKAGE_DIMENSIONS:

    15.12 x 9.84 x 3.7 inches

  • PROCESSOR:

5.1 GHz AMD Ryzen AI Max+

  • PROCESSOR_BRAND:

    AMD

  • RAM:

    128 GB LPDDR5X 8000MT/S

  • SCREEN_RESOLUTION:

    3840 x 2160

  • SERIES:

    EVO-X2

  • STANDING_SCREEN_DISPLAY_SIZE:

    75

  • WIRELESS_TYPE:

    2.4 GHz Radio Frequency, 5 GHz Radio Frequency, 5.8 GHz Radio Frequency, 802.11ax, Bluetooth

Guaranteed secure checkout
Product Details
1-Year Warranty, 24-hour support. Amazing discounts for business buyers. Follow our brand for deals and updates.

Shenzhen GMK Technology Co., Ltd is a high-tech enterprise integrating scientific research, development, production, and sales of mini PCs and computer accessories. Our company was established on June 6, 2019, and our core team is composed of engineers with 20 years of experience in consumer electronics. It was our shared dream that brought us together as GMKtec. GMKtec manufactures professional mini PC computers.

GEEK | We aim to provide the ultimate product experience for users around the world. MODERN | We aim to provide users with cutting-edge technology products. (K) CREATIVE | We always adhere to the innovative spirit of keeping pace with the times.

GMKtec mini PCs are widely used in offices, home theaters, and businesses. All computers support customer customization. They are equipped with the latest AMD Ryzen and Intel processors, with a built-in GPU for smooth framerates and gameplay.
Customer Reviews
4.10

(100 ratings)

5 stars: 9
4 stars: 6
3 stars: 6
2 stars: 20
1 star: 59

14 Comments

Picked this up on a whim after seeing the specs—Ryzen AI Max+ 395, 128GB LPDDR5X RAM, 2TB SSD, and that Radeon 8060S iGPU that supposedly trades blows with a desktop 4060. At $2k, I was sweating bullets. Thought, “Is this mini PC hype or actual magic?” Spoiler: it’s magic.

Setup was a breeze—Windows 11 Pro out of the box, though I nuked the bloatware in 10 minutes. Boot time? Under 5 seconds. Now, the performance: holy crap. I’m running 4K edits in Premiere without breaking a sweat, local LLMs (70B params at 20+ tokens/sec), and games like Cyberpunk at 1440p ultra hitting 80-100 FPS with FSR. Multitasking? 50 tabs, Discord, OBS streaming—zero lag. Fans whisper until you hammer it, then it’s a gentle hum at 85-90°C max.

Ports are a dream: dual USB4 for 40Gbps docks, HDMI/DP for quad 8K, WiFi 7 that’s faster than my fiber, and that SD 4.0 reader is a godsend for photographers. Storage? Add another M.2 if you need it; I haven’t touched mine.

Downsides? The power brick is a brick (literally, it’s huge). And at this price, it better be flawless—GMKtec’s 1-year warranty helps, but I’d love extended options. Oh, and the RAM is soldered, so commit upfront.

Two months in, my old tower’s in the closet. If you’re building a portable workstation, AI rig, or just want a space-saving monster, this justifies the hit to your wallet. GMKtec just redefined “mini” for high-end. Grab it if you can swing the cost—you won’t regret it.

Expensive but I love it
2026-02-21    -dan

Great machine for self-hosting a variety of LLMs. Also good as a gaming PC on the side. I'll likely host some high-CPU Docker containers as well, which are currently choking an N95 mini PC.

Only gripe is the fans are kinda loud and there's no custom fan curve in the BIOS, just 20% steps, and you can't set the temp values, so there's a fair amount of silence, then FANNNNN, then silence.
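The fan control this reviewer is missing is just a piecewise-linear curve over temperature. A minimal Python sketch of what an adjustable curve would look like; the temperature/duty points here are hypothetical examples, not anything the EVO-X2 BIOS actually exposes:

```python
def fan_duty(temp_c: float, curve: list[tuple[float, float]]) -> float:
    """Linearly interpolate fan duty (%) from a custom curve.

    `curve` is a sorted list of (temperature_C, duty_percent) points.
    This is the smooth control the EVO-X2 BIOS lacks: it only offers
    fixed 20% duty steps with no adjustable temperature thresholds.
    """
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            # interpolate between the two surrounding points
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]

# Hypothetical curve: quiet until 40°C, ramp to full duty by 85°C.
curve = [(40, 20), (60, 40), (75, 70), (85, 100)]
fan_duty(50, curve)  # -> 30.0, halfway between the 20% and 40% points
```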

Where do I start? This thing is fantastic and configurable. I have the 128 GB variant, and it's nice to be able to allocate 96GB to video RAM and still have 32GB for the system. It runs quiet unless you're pushing it, and even then it's tolerable. AI power is where it's at with this box. I haven't really gamed on it, so I can't speak to that.

When desk space is at a premium and you've outgrown your big RGB-lit, liquid-cooled case with a 1200-watt power supply chugging along at a moderate 400 watts, you get the fastest consumer-grade AI micro computer on the market that will put the giant to shame. Throw in a 4TB NVMe on top of the 2TB it comes with and you have a powerhouse of a box, IMO. It's been about 8 years since the last computer I built.

UPDATE: I am still loving this machine. Its power efficiency is off the charts, and it plays a whole bunch of games at totally reasonable framerates. This is my go-to machine when I want to reduce heat in my office during the summer, or when I don't plan to play a game that pushes beyond its capacity. I know it's built for AI, but this machine is also a game changer for integrated-graphics gaming. It's on a whole different scale compared to any other current offering. Sure, it can't compete with a discrete card in raw performance in most areas, but it can achieve a majority of the speed with a small fraction of the power.

ORIGINAL: Performance is excellent. I'm scoring 12k CPU and GPU on Time Spy while drawing only 150W. Stable Diffusion performance is amazing per watt but still nothing compared to my 14900K/4090, not that that is expected, though it seems to be roughly 3x the power efficiency, which is definitely impressive. I left the RAM set at 64GB allotted to the GPU (VRAM), and so far, even with Stable Diffusion, it hasn't used more than 28GB.

Stock temps under maximum load hit the thermal throttle by a very small margin, but after an upgrade to PTM7950 it never throttled once. The stock thermal paste was a bit like cake frosting, so replacing it is definitely advisable, especially with PTM7950. If you run the stock paste, maybe leave the UEFI/BIOS power setting on Balanced to keep temps in check. My system came with an ADATA Legend 900 2TB NVMe SSD, which performs decently, so I recommend leaving it as the boot drive. I added a 4TB Samsung 990 Pro as my working drive.
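The efficiency claim above reduces to a simple points-per-watt figure. A back-of-the-envelope Python sketch; only the EVO-X2 numbers (~12k Time Spy at ~150 W) come from this review, and the desktop comparison values are illustrative assumptions:

```python
def points_per_watt(score: float, watts: float) -> float:
    """Benchmark score divided by total power draw."""
    return score / watts

# The review's rough numbers: ~12k Time Spy while drawing ~150 W.
evo_x2 = points_per_watt(12_000, 150)   # 80.0 points/W

# Hypothetical big tower for comparison (assumed figures only):
# a rig scoring ~30k while drawing ~600 W lands at 50 points/W.
desktop = points_per_watt(30_000, 600)
```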

Best strix halo option in SG
2026-02-18    -It's me

Best strix halo option. Local SG warranty, decent performance and specs, avoids Intel 10GigE instability that plagues alternatives, and a decent price for all this



The largest AI model I am able to load onto my GMKtec EVO-X2 in the LM Studio software outputs about 8 tokens/second: Qwen3-235B-A22B-Instruct-2507-gguf-q2ks-mixed-AutoRound-inc. This mixture-of-experts model is also known as Qwen3-235B-A22B-Instruct-2507-128x10B-Q2_K_S-00001-of-00002.gguf. It is a 235-billion-parameter model with Q2_K_S automatic Intel quantization. I am able to load it with 96GB of memory dedicated to VRAM and the following model config in LM Studio 0.3.20:

GPU Offload: 80 / 94
Offload KV Cache to GPU Memory: NO (slider to the left)
Keep Model in Memory: NO (slider to the left)

All other model config settings are at their default values, including the "Try mmap" option, which seems to be necessary to load this particular model.

In the magnifying lens / Runtime window, I had to select GGUF: ROCm llama.cpp (Windows) v1.42. ROCm allows more layers to be offloaded to the GPU and makes this large model faster than the Llama 3.3 70B model, which outputs about 5 tokens/second with Vulkan on the test prompt, "Why is the sky blue?" Smaller models seem to run faster with Vulkan.

When you reboot the computer, you can repeatedly press the Esc key to enter the BIOS configuration, set 96GB for the graphics memory, and activate Performance mode.

It is possible to activate High Performance mode in Windows 11 by right-clicking the Windows menu, selecting Terminal (Admin), and entering at the command prompt:

powercfg -setactive SCHEME_MIN

I then went into Control Panel / Power Options to change the advanced power settings for the "High performance" plan, which was now unhidden. I set "Turn off hard disk after" to Never, "Sleep after" to Never, and "Minimum processor state" to 100%. The output speed of Qwen3 235B increased to 8.7 to 8.8 tokens/second for the test question, "Why is the sky blue?"
This output rate is faster than I can read while thinking about what the model is writing. Smaller AI models might output answers faster, but their answers tend to be wrong more often than larger models'. Wrong answers delivered fast are useless to me.

Qwen3 235B sometimes outputs gibberish, such as "GGGGG..." I don't know if this is due to a memory shortage, a context window that is too small, the quantization, or something else. I don't seem to have enough memory left to increase the context window size from the default 4,096. Keeping prompts short, on one line, seems to help prevent this issue.

08/04/2025 Update: I was able to increase the context window size by setting these options:

Context Length: 262144
Evaluation Batch Size: 256
Flash Attention: YES (slider to the right)

The default Evaluation Batch Size of 512 seems to cause an eventual crash, but 256 seems stable. The speed of Qwen3 235B gradually slows as the context window fills up. After about 27,000 tokens are in the context window, the output speed drops to about 1 token/second, so 27,000 is about the practical limit for the context length unless you are patient.

I deleted OneDrive and the Recall feature. In the Edge browser, I disabled Settings / System and performance / System / "Startup boost" to prevent Edge from staying resident in memory all the time, and I also disabled "Continue running background extensions and apps when Edge is closed". I ran msconfig to disable the Print Spooler and another print-related process that I will never use on this computer.

8/5/2025 Update: I was able to load the new AI model openai/gpt-oss-120b. The GGUF files come in two parts:

gpt-oss-120b-MXFP4-00001-of-00002.gguf
gpt-oss-120b-MXFP4-00002-of-00002.gguf

This is a 120-billion-parameter "GPT Open Source Software" model from OpenAI with MXFP4 quantization.
The size listed in LM Studio is 63.39GB. In the BIOS, I had to select 64GB dedicated to graphics memory. I am still able to load Qwen3 235B, but its output speed drops to about 7 tokens/second for the test question, "Why is the sky blue?" For Qwen3 235B, it seems better to have 96GB dedicated to graphics memory. The output speed of openai/gpt-oss-120b is about 10 tokens/second for the same question.

The following is the openai/gpt-oss-120b model config in LM Studio 0.3.21:

Context Length: 30000
GPU Offload: 36 / 36
Evaluation Batch Size: 4
Offload KV Cache to GPU Memory: NO (slider to the left)
Keep Model in Memory: NO (slider to the left)
Try mmap: NO (slider to the left)

All other model config settings are at their default values. Attempting to offload the KV cache to GPU memory results in a failure to load, as does attempting to activate Flash Attention. Setting the mmap option to NO seems to leave much more RAM available once the model is loaded: usage drops from about 57GB to less than 5GB according to the number in the lower-right corner of LM Studio.

In the magnifying lens (Discover) / Runtime window, I had to select GGUF: Vulkan llama.cpp (Windows) v1.44.0. Attempting to use ROCm llama.cpp v1.43.1 causes the openai/gpt-oss-120b model not to load. So, when I switch between Qwen3 235B and openai/gpt-oss-120b, I must choose the version of llama.cpp that works with each model.

8/7/2025 Update: I was occasionally getting an error in GPT after it finished outputting a response, something like: "Unexpected end of grammar stack after receiving input: G". Repeated outputs of the letter G, such as "GGGGG...", seem to be related to the Evaluation Batch Size, which seems to be stable at a value of 4 for the GPT model.
I used a variation of the following prompt and kept dividing the Evaluation Batch Size by 2 until I no longer got the error message:

Please write a story about people who wake up from suspended animation in another star system. The study of Arctic and Antarctic fish allowed scientists to develop an artificial protein that inhibits ice formation. The people left Earth due to warfare and did not have time to prepare. The ship's AI still thinks it is near Earth in the past, because that is when the AI was last updated while its atomic clock still functioned. The atomic clock was hit with shrapnel in orbit, caused by anti-satellite weapons, that was recorded as a micro-meteorite strike. Automated systems sealed the hole in the hull, but the clock remained non-functional. Any watches worn by the people in cryochambers caused frost injuries to the skin where bare metal contacted skin. The watches were broken by the cold in the cryochambers. Any watches that were stored outside the cryochambers either have depleted batteries or have not been regularly wound by movement in the case of automatic watches. Analog watch hands are halted at various time indications. An automated routine in the cryochambers wakes the crew up independently of the main AI system. Any views of the outside are through cameras and sensors which are controlled by the AI. A crew member tells the AI system to start up the main systems, but the AI system refuses to do so, because the AI refuses to accept that time has passed and thinks the crew is discussing a hypothetical scenario. The crew must either convince the AI that time has passed or somehow bypass the AI's control over the ship to survive. Bypassing the AI will result in loss of navigation and dynamic fusion containment. The AI shows the crew a picture of the Earth before they reached escape velocity as if the Earth were outside in the present. The image causes one crew member to believe that they are still on Earth. The crew member's mental illness had been well treated on Earth before the war made prescription refills impossible. The crew attempt to improvise a new clock for the AI, but the AI recognizes the inaccuracy of the clock and ignores the clock. The crew attempt to manually restart the fusion reactor but abandon the attempt when they almost lose containment. All hope seems to be lost. However, the improvised clock allows the AI's predictions, which are analogous to dreams or hallucinations in humans, to move forward in time. Previously, the AI's predictions had all been overlaid on top of each other at the same instant in time. The AI decides on its own to search for a pulsar as a time reference.

8/10/2025: I feel the need to respond to Bill Bohn's review of a computer which he apparently does not own. There is nothing misleading in my review. The numbers are what they are. If you actually had a GMKtec EVO-X2, you could reproduce them. Moonshot Kimi-K2 is simply wrong when it writes, "The reviewer is running a GMKtec EVO-X2 mini-PC that houses a Ryzen 9 8945HS (8 Zen 4 cores + Radeon 780M iGPU) plus 96 GB of system DDR5 and BIOS UMA set to 96 GB." Kimi-K2 misidentifies the processor, the graphics unit, and the amount of memory in the GMKtec EVO-X2. The processor is a Ryzen AI Max+ 395, which has 16 processor cores, not 8. The graphics processor is an integrated AMD Radeon 8060S, not a 780M. The GMKtec EVO-X2 has 128GB of total system memory, not 96GB; a maximum of 96GB of that total can be allocated as VRAM. Why are you reviewing a computer that you apparently do not own and know nothing about? Do not blindly accept whatever information an AI system gives you.

8/11/2025: I have changed the Evaluation Batch Size from 16 to 4 for the gpt-oss-120b model. This seems to be the most stable setting so far.

8/17/2025 Update:
The gpt-oss-120b model from OpenAI now outputs 36 to 40 tokens/second for the prompt, "Why is the sky blue?" This speed increase was made possible by selecting the new ROCm llama.cpp v1.46.0 and allocating 96GB of memory to video memory in the BIOS. Thanks to whoever made this dramatic speed increase possible. What's funny is that gpt-oss-120b tells me it is impossible for consumer-level hardware to achieve this speed.

In the gpt-oss-120b settings, I set:

Context Length: 63000
GPU Offload: 36/36
Evaluation Batch Size: 256
Offload KV Cache to GPU Memory: YES (slider to the right)
Flash Attention: YES (slider to the right)

In the magnifying lens / Runtime setting, I set GGUF: ROCm llama.cpp (Windows) v1.46.0. All other parameters are at their default settings for gpt-oss-120b.

Qwen3 235B now outputs about 9 to 10 tokens/second. The new ROCm llama.cpp v1.46.0 seems to allow 81 layers to be offloaded to the GPU instead of 80. My model settings for Qwen3 235B are now:

Context Length: 30000
GPU Offload: 81/94
Evaluation Batch Size: 128
Offload KV Cache to GPU Memory: NO (slider to the left)
Keep Model in Memory: NO (slider to the left)
Flash Attention: YES (slider to the right)

In the magnifying lens / Runtime setting, I set GGUF: ROCm llama.cpp (Windows) v1.46.0. All other parameters are at their default settings for Qwen3 235B.

Currently, there seems to be no substitute for experimenting with the various parameters to get a stable configuration in LM Studio. The parameters that AI models recommend do not work. I recommend dividing the Evaluation Batch Size by 2 if you get gibberish from one of these large models, or if the model crashes.

8/26/2025: After upgrading to AMD Adrenalin 25.8.1, the gpt-oss-120b model stopped working with ROCm llama.cpp v1.46.0 in LM Studio 0.3.23. I can still use Vulkan llama.cpp v1.46.0, but at a reduced output speed of about 33 tokens/second for the test prompt, "Why is the sky blue?"
You might want to set a System Restore point before upgrading the AMD Adrenalin software so that you can go back to a faster configuration if the upgrade degrades inference speed.

9/10/2025: I installed a second 4TB NVMe drive made by Western Digital after removing the rubber feet to reveal the screws. I followed gpt-oss-120b's instructions for running a free Windows program named Rufus to make a bootable USB drive with a Debian installation .iso file extracted to it, then booted from the drive. Everything went well until I had to enter my wireless network password: after repeatedly entering the correct password, the installation would not advance beyond the DHCP configuration. Updating the computer's BIOS did not help. I discovered I had to select "None" as the driver for my wired network adapter; that option comes up somewhere in the non-graphical installation process. The wireless Debian installation then worked.

The gpt-oss-120b model outputs 47 tokens/second in LM Studio 0.3.25 running in Debian Linux with the GNOME graphical interface for the prompt, "Why is the sky blue?" GNOME has a High Performance mode option, similar to Windows, which I activated. I set the magnifying glass / Hardware / Memory Limit value to 110. The Runtime / GGUF setting is Vulkan llama.cpp (Linux) v1.50.2; for some reason, ROCm llama.cpp v1.50.2 does not currently work in Debian Linux on the EVO-X2.

The gpt-oss-120b model settings in LM Studio on Debian Linux are:

Context Length: 30000
GPU Offload: 36/36
Evaluation Batch Size: 127
Keep Model in Memory: NO (slider to the left)
Try mmap(): NO (slider to the left)
Flash Attention: YES (slider to the right)

All other gpt-oss-120b model parameters are at their default settings.

10/1/2025: LM Studio 0.3.27 running in Debian "Trixie" Linux automatically installed ROCm llama.cpp (Linux) v1.52.0 and selected it as the GGUF engine, even though ROCm is currently not supported on the Debian "Trixie" edition. Vulkan llama.cpp (Linux) v1.52.0 apparently uses more memory than v1.50.2 and causes the output speed of gpt-oss-120b to plummet to around 2 tokens/second, which is a step in the wrong direction. Luckily, I could scroll down in the All tab of the Runtime window, click the "..." next to Vulkan llama.cpp (Linux), select "Manage Versions", and delete v1.52.0 to make v1.50.2 the default. I also had to uncheck "Auto-update selected Runtime Extension Packs".
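The batch-size troubleshooting this reviewer describes (halve the Evaluation Batch Size until the gibberish or crash stops) is a small search routine. A Python sketch of that procedure; `is_stable` is a hypothetical stand-in for manually reloading the model in LM Studio with the candidate size and re-running a test prompt, not an LM Studio API:

```python
def find_stable_batch_size(start, is_stable, minimum=1):
    """Halve the evaluation batch size until `is_stable(size)` passes.

    `is_stable` is the manual check from the review: load the model
    with the given batch size, run a test prompt, and report whether
    the output is free of gibberish ("GGGG...") and crashes.
    """
    size = start
    while size >= minimum:
        if is_stable(size):
            return size
        size //= 2
    return None  # no stable size found down to `minimum`

# Example: for gpt-oss-120b the review found only small batches stable.
# Pretend anything above 4 produces gibberish:
find_stable_batch_size(512, lambda size: size <= 4)  # -> 4
```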


AI workhorse
2026-01-10    -Turtle

This is an amazing machine for the value/price: 128GB of memory with a GPU, and enough performance to run some very large LLMs locally. Energy efficient. Quiet. It scratches an itch.

I wish the network interface were better (SFP+ would have been nice, or at least 10G copper), and throwing in OcuLink would have been welcome for future expansion. Still, this one is worth it.

Shipping & Returns
We've got you covered

In most regions, you only pay the shipping fee once (please check the Orders & Shipping page).

Free returns within 14 days of purchase (excluding certain items such as perishable goods, custom-made products, masks, perfumes, and hazardous or flammable products such as aerosols).

Import Duty Information

We'll handle the complicated part. Prices for items shipped to all EU countries (except the Canary Islands), the UK, US, Canada, Australia, New Zealand, Puerto Rico, Switzerland, Singapore, South Korea, Kuwait, Mexico, Qatar, India, Norway, Saudi Arabia, Taiwan, Thailand, the UAE, Japan, Brazil, the Isle of Man, San Marino, Colombia, Chile, Argentina, Egypt, Lebanon, Hong Kong SAR, Bahrain, and Turkey include all import duties. The price you see is the final amount you pay.

Estimated Delivery Time

Express shipping: 7-10 days

Shipped from the corresponding warehouse.

Need more information?
Return Policy

We stand behind the quality of our products. If you are not completely satisfied with your purchase, our policy lets you return it easily within 30 days of receipt.

Easy Exchange or Refund
  • Exchange for a different size, color, or style, or receive a full refund.
  • All returned items must be unused, in their original packaging, with all tags intact.
Simple Process
  • Request a return online or contact our customer service team for assistance.
  • Pack the item securely and include the original packing list.
  • Use the prepaid shipping label we provide when sending your return.
  • Refund requests are processed as soon as they are received.

If you have any questions or concerns about your return, please contact our dedicated customer service team at any time. Your satisfaction is our top priority.
