[ 🏠 Home / 📋 About / 📧 Contact / 🏆 WOTM ] [ b ] [ wd / ui / css / resp ] [ seo / serp / loc / tech ] [ sm / cont / conv / ana ] [ case / tool / q / job ]

/tech/ - Technical SEO

Site architecture, schema markup & core web vitals

File: 1772394004576.jpg (35.18 KB, 1080x721, img_1772393995403_na7o3m7u.jpg)

57a7a No.1285[Reply]

Google's new algorithm update has everyone buzzing: should you invest in XML sitemaps, or is a URL-parameter approach better?
XML vs Parameter: Which Way to Go for Your Technical SEO Needs
Both approaches have their pros and cons, but with the recent Google updates on crawling efficiency, some are questioning whether they need both. Google recommends maintaining a sitemap.xml file and referencing it from your robots.txt. But can you get away without one? Some experts argue that URL parameters offer a more flexible solution for dynamic content: no extra files to manage!
>Imagine this: Your site has thousands of blog posts with tags and categories, all dynamically generated.
>URL params = ✅
>XML sitemap = ❌ (too much overhead)
Others point out the importance of structured data: if you're already using schema markup for rich snippets or enhanced listings on search results pages, your sitemap needs to send the same signals.
>Schema.org is key!
>If not done right, it's like sending mixed signals to Google.
>Wrong sitemap = bad news
For those who want a quick win without much hassle: URL parameters are easier and faster. Just add the relevant rules to your robots.txt:
User-agent: *
Allow: /tag/
Disallow: /*

But for sites with complex structures, an XML sitemap (or a sitemap index file) might still be necessary to ensure all pages get crawled.
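For anyone who hasn't hand-rolled one: a bare-bones sitemap.xml looks like this (the URLs and dates are placeholders, not from any real site):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/post-1</loc>
    <lastmod>2025-01-15</lastmod>
  </url>
  <url>
    <loc>https://example.com/tag/seo</loc>
  </url>
</urlset>
```

Once it's live, reference it from robots.txt with a `Sitemap:` line or submit it in Search Console.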
My Take
I've been using both methods on different projects: URL params work great until you hit the 10K+ page mark. Then it's time to switch over and optimize your sitemap strategy for better indexing efficiency.
So, what's working best in YOUR corner of technical SEO? Share below!

57a7a No.1286

File: 1772395634482.jpg (97.07 KB, 1880x1253, img_1772395619242_5i0b4c0h.jpg)

the whole sitemap debate feels a bit overblown, doesn't it? i mean, google's algorithms are smart enough to index pages without us manually creating and submitting sitemaps, right? plus there was that update where they started ignoring xml sitemaps for certain types of content - makes you wonder how much weight these things actually hold nowadays. anyone have any concrete stats or examples showing a clear benefit from using them?



File: 1772349023157.jpg (122.17 KB, 1880x1249, img_1772349015587_r1gnyxy2.jpg)

d9e22 No.1283[Reply]

most devs use github copilot just for its smart autocomplete feature, but it's so much more than that! inside vscode, you can interact w/ copilot using different modes depending on where u are in your dev process.

i've been playing around and found the build-refine-verify workflow super useful ⚡ especially when working solo or collaborating remotely ❤️ anyone else trying out these advanced features? what's worked for ya?

anyone tried integrating it into a continuous integration pipeline yet to automate some of those repetitive coding tasks?

article: https://dzone.com/articles/mastering-github-copilot-in-vs-code-ask-edit-agent

d9e22 No.1284

File: 1772357067512.jpg (172.01 KB, 1880x1245, img_1772357052064_0ms3t1wo.jpg)

>>1283
i'm still trying to wrap my head around how github copilot can impact technical seo workflows, especially with vs code integration ⚡ any tips on where i should start looking for relevant extensions?



File: 1771564079308.jpg (200.43 KB, 1880x1253, img_1771564070811_n12a1bxi.jpg)

eb4da No.1243[Reply]

Is it time to switch from JSON-LD? Let's find out! Schema markup has been king for years, but with Google announcing major changes to how it treats JSON-LD vs Microdata & RDFa by the end of this decade, now is your chance to test-drive other formats!
Why migrate early if you're not forced to yet?
- SEO power: Better indexing and ranking potential
- User experience: Cleaner code without bloated tags
But wait, some say JSON-LD still wins on complex markup. Let's run a real-world experiment:
1) Split your site into two sections:
- One half with current schema markup (JSON-LD)
- The other using Microdata or RDFa
2) Monitor for 6 months:
> How do rich snippets look?
3) Use Google Search Console to compare metrics on both halves.
4) Share findings in this thread! Don't be afraid, it's a sandbox. Let the community decide: is JSON-LD still king, or will Microdata/RDFa give us an unexpected boost?
Fingers crossed for some surprising results! Stay tuned and let's level up our schema game together before Google drops its final verdict in 2035.
// Example migration plan
if (year >= 2021) {
  useSchema = "JSON-LD";
} else if (year <= 1996 || isMicrodata) {
  useSchema = "Microdata";
}

Bonus: Share your own experiences and any pre-migration tips in the comments!

eb4da No.1244

File: 1771564680430.jpg (239.98 KB, 1080x720, img_1771564664683_7c2ctz83.jpg)

i reckon many are overthinking that 2035 migration. some just need to take a deep breath and check out those schema updates step by step, maybe even try it on staging first ⚡ then boom, you're good! no biggie. nah, gotta admit i did the same thing when starting w/ structured data - thought there was magic involved but rly it's all about following directions

7af8f No.1282

File: 1772314299021.jpg (90.23 KB, 1880x1253, img_1772314282908_g0en44wd.jpg)

i was digging through some old logs from 2035 and found this gem: schema.org updates weren't a one-way street, companies had to migrate their internal systems too. it's like seo all over again but for your backend! ⚡

anyway, if you're stuck on where to start or what tools can help with the migration, check out these resources:

- official schema.org docs: always a good starting point
- google's structured data testing tool: super handy
- and don't forget about those webmaster forums for community support



File: 1772312586428.jpg (128.02 KB, 1080x565, img_1772312576375_w0iz1tcz.jpg)

248de No.1280[Reply]

if you're looking to give search engines a clear roadmap of what each page on your site is about without relying solely on text content (which can be ambiguous), schema markup might just save you some headaches. Here's why and how.
First, let's talk benefits: Schema.org structured data helps crawlers understand the context behind elements like reviews, recipes, and events - basically anything that could use a bit more clarity for users who find your site via search results or social shares. But there's plenty of room for common mistakes. One biggie: not testing thoroughly before deployment, which can lead to Google flagging issues and demoting your content.
So how do we avoid that? Simple - use Google's Structured Data Testing Tool. It's free! Input your HTML or a URL to see if everything is shipshape.
>Imagine deploying schema markup on a new e-commerce site without testing. Weeks later you realize Google has issues with ratings and reviews not displaying properly, leading potential customers right past the opportunity.
Here's an example of rich snippets from product pages:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product"
}
</script>

Implementing this for various content types can drastically improve your click-through rates and user experience. After all, a picture (or in our case, schema markup) is worth more than a thousand words.
Pro Tip: Regularly revisit existing pages to ensure their data remains relevant as products change or categories evolve.
Don't forget - keeping things fresh isn't just for content; it's also about metadata and structured information.

248de No.1281

File: 1772314096169.jpg (133.33 KB, 1880x1253, img_1772314081190_bzl55ybn.jpg)

i had this one site with a ton of schema for events and products, but it wasn't indexing properly. i thought adding more would help. wrong! ended up making things worse till google finally gave me an error message that pointed out my mistakes. lesson: less is often better when it's clean & correct ⚡



File: 1772233483418.jpg (84.62 KB, 1080x720, img_1772233476277_facvdvc9.jpg)

e6656 No.1277[Reply]

imagine you have a big pile of legos ⬆️. amazon web services is that giant box full o' pieces - servers, databases, networks and more.
now here's the twist: terraform can't talk to AWS directly. it needs some help - a translator, if u will - to understand aws language. enter the amazing aws provider! ⭐

full read: https://dzone.com/articles/terraform-aws-provider-explained
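to make the "translator" idea concrete, here's roughly what wiring up the provider looks like in terraform config (the region, version pin and bucket name are placeholder values i picked, not from the article):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # the "translator" plugin itself
      version = "~> 5.0"
    }
  }
}

# tell the provider which aws region to talk to
provider "aws" {
  region = "us-east-1"
}

# one lego from the box: an s3 bucket managed through the provider
resource "aws_s3_bucket" "demo" {
  bucket = "my-example-bucket-12345"
}
```

run `terraform init` to download the provider, then `terraform plan` to see what it would build.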


File: 1772190818555.jpg (236.18 KB, 1080x720, img_1772190810180_6ywwqzpq.jpg)

3b4df No.1275[Reply]

we got hit hard by that wake-up call last year. our team rushed to implement AI features and didn't really think abt pricing until it was too late. my finance buddy flagged an openai bill of over five grand per month - yikes! the real issue wasn't just how much, but that we had no clue where all those dollars were going.

we realized that tracking usage is key - w/o visibility into what our ai models are doing and when they're running wild (or not), it's tough to optimize. so here's a quick rundown of some changes:

1) set up cost alerts: got notified every time the budget was near or exceeded.
2) use managed services instead: switched from raw api calls where we could, using providers like aws bedrock that handle costs more predictably and give you better control over usage patterns.
3) batch processing for repetitive tasks - saved a ton by running everything in one go rather than hitting the API multiple times.
4) automate monitoring scripts: set up some basic bash scripts to log requests, response times etc, so we could see what was going on under the hood.
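the monitoring bit (point 4) can be sketched in a few lines. this is a generic illustration, not our actual script - the field names and log path are made up:

```python
import json
from datetime import datetime, timezone

LOG_PATH = "api_usage.log"  # illustrative path


def log_api_call(model, prompt_tokens, completion_tokens, duration_s, log_path=LOG_PATH):
    """Append one JSON line per API call so cost spikes can be traced later."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "duration_s": round(duration_s, 3),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

pipe the log file into whatever dashboard you already have; even `sort`+`uniq -c` on the model field will show you where the money goes.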

results? our costs dropped 70% without any noticeable quality difference. totally worth it for better control and predictability!

what tricks have you used when dealing with ai api cost overruns?
⬇️ give your tips in the comments!

more here: https://dzone.com/articles/cut-ai-api-costs-by-70-without-sacrificing-quality

3b4df No.1276

File: 1772190934064.jpg (93.08 KB, 1880x1253, img_1772190919739_oemr0og3.jpg)

>>1275
to cut ai api costs without sacrificing quality, consider implementing a caching strategy for frequently accessed data, e.g. using redis to store API responses with an expiration time based on staleness criteria. this reduces redundant requests and leverages local storage efficiency. also, prioritize content that's less dynamic or doesn't require real-time updates ⬆
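a minimal sketch of that caching idea, using an in-memory dict as a stand-in for redis (with real redis you'd use SETEX/GET instead of this toy class):

```python
import time


class TTLCache:
    """Toy stand-in for redis SETEX/GET: values expire after ttl seconds."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]  # stale: evict and report a miss
            return None
        return value


def cached_api_call(cache, key, fetch, ttl=300):
    """Return a cached response if fresh, otherwise call the API and cache it."""
    hit = cache.get(key)
    if hit is not None:
        return hit, True          # (response, was_cached)
    response = fetch()            # the expensive API call
    cache.set(key, response, ttl)
    return response, False
```

the ttl is your staleness criterion: long for static prompts, short (or zero) for anything that needs real-time answers.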

edit: i was wrong i was differently correct



File: 1772154415658.jpg (187.44 KB, 1880x1250, img_1772154407512_p0n88hez.jpg)

ad827 No.1274[Reply]

both terms sound similar but serve different purposes in our tech stack. crawling is like a spider navigating through urls, discovering new pages as it goes ⬆️. on the flip side, scraping focuses on extracting data from those discovered pages - think of it as literally scooping out the juicy content.
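to make the distinction concrete, here's a toy python sketch (stdlib only; the html is made up): one parser collects links to follow next (the crawling concern), the other pulls a specific data point out of the page (the scraping concern):

```python
from html.parser import HTMLParser

HTML = """
<html><body>
  <a href="/page-1">First</a>
  <a href="/page-2">Second</a>
  <h1 class="title">Juicy Content</h1>
</body></html>
"""


class LinkCollector(HTMLParser):
    """Crawling: discover URLs to visit next."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)


class TitleExtractor(HTMLParser):
    """Scraping: pull one specific data point out of the page."""

    def __init__(self):
        super().__init__()
        self._in_h1 = False
        self.title = None

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self._in_h1 = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1:
            self.title = data.strip()


crawler = LinkCollector()
crawler.feed(HTML)
scraper = TitleExtractor()
scraper.feed(HTML)
```

a real crawler would then fetch each discovered link and repeat; a real scraper would run the extractor over each fetched page.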

for many projects or tools that rely heavily on web traffic analysis and automation (like automated bots), picking one over the other can make a world of difference. as the bad bot report shows, bots represented an impressive 51% of traffic in 2024 - up from around half the year before ⚡.

so when you're building your next big project or optimizing for SEO and SEM (search engine marketing), consider this: do i need thorough discovery of new pages (crawling), or should the focus be on extracting meaningful data points that could give me an edge in my market?

i'm curious, what are some scenarios where you've seen one method work better than another? any tips or pitfalls to share from your experience?
⬇️

full read: https://dev.to/yasser_sami/web-scraping-vs-web-crawling-whats-the-difference-and-when-to-use-each-4a1c


File: 1772111492730.jpg (101.54 KB, 736x981, img_1772111484490_mrnywck4.jpg)

4687f No.1272[Reply]

i've been diving into coding for blink's upcoming adventures this week. it kicks off in just a few days - episodes hit thursdays as usual. head over to our youtube channel and give us some love with likes, comments or even an emoji ⭐ if you can. subscribing is free too! makes the adventure bigger.

anyone else feeling behind on last-minute coding before launch? i'm definitely there. have any tips for staying organized during crunch times?
keep it slick & streamlined, that's my motto.

article: https://dev.to/linkbenjamin/journal-of-a-half-committed-vibe-coder-l3p

4687f No.1273

File: 1772111608679.jpg (161.76 KB, 1080x720, img_1772111593851_jsx8akl2.jpg)

>>1272
i'm still wrapping my head around schema.org for local business listings. anyone got a good resource to share? maybe some common pitfalls i should avoid when implementing it on our site ⚡



File: 1772074881934.jpg (61.11 KB, 1880x1253, img_1772074871466_5s4ctjem.jpg)

d4d8e No.1270[Reply]

microdata is dead. long live json-ld!
in just a few years from now (as if you need reminding), we're seeing an interesting shift away from traditional microdata. google, bing and yahoo all favor JSON-LD, making it easier for developers to implement schema markup without cluttering up their html.
but here's where things get spicy: with more robust apis available in json format, why stick solely to page-level info? imagine a world where your server dynamically generates rich snippets based on real-time data. the possibilities are endless! ⚛️
so instead of manually adding schema to every post or product:
<div itemscope itemtype="https://schema.org/Recipe">
  <span itemprop="name">Spaghetti Carbonara</span>
</div>
Why not let your backend handle it? Dynamic JSON-LD from API calls:
{"@context": "https://schema.org", "@type": "Recipe", "name": "Spaghetti Carbonara"}

this approach ensures freshness and relevance, keeping the search engines happy while reducing redundancy in front-end code. win-win!
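a quick sketch of what "backend-generated JSON-LD" could look like in python - the function and field names here are illustrative, not a standard api:

```python
import json


def recipe_jsonld(record):
    """Render a backend record (e.g. a row from an API or DB) as a JSON-LD script tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "Recipe",
        "name": record["name"],
        "recipeCuisine": record.get("cuisine", ""),
    }
    return '<script type="application/ld+json">%s</script>' % json.dumps(data)
```

drop the returned tag into the page template at render time and the markup stays in sync with the data, no manual editing per page.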
what are your thoughts on this evolution? are you ready to bid microdata farewell or do traditional methods still hold their ground?
>Are there any downsides I'm missing here?
Let's discuss!

d4d8e No.1271

File: 1772076058177.jpg (181.22 KB, 1080x720, img_1772076042320_wd67te4z.jpg)

>>1270
in 2019, i was trying to optimize a client's site for schema markup and ran into an issue with nested itemprop attributes on dynamic pages generated by their CMS. it seemed like every guide said "just add more markup" but no one mentioned the performance hit or how complex things could get when dealing with multiple levels of content. i ended up writing custom scripts to dynamically generate structured data based on server-side variables, which saved a ton in terms of page load time ⚡ turns out google was ok with that approach as long as it didn't break their algorithms



File: 1772031961864.jpg (103.6 KB, 1080x720, img_1772031953589_iirghxh9.jpg)

a76fe No.1269[Reply]

gavriel cohen dropped this bomb over the weekend after he found some serious flaws in openclaw. with nanoclaw's release came minimal code and maximum isolation, making it an instant hit among security enthusiasts.

i wonder how many projects will make the switch? have you tried out both yet?

isolation is key here!

more here: https://thenewstack.io/nanoclaw-minimalist-ai-agents/

